xbrlapi-developer Mailing List for Java XBRL API implementation
Brought to you by: shuetrim
From: Justin L. <jus...@gm...> - 2016-03-04 12:30:33
That's what I was thinking, but I wanted to make sure. Thanks!

Justin
From: Geoff S. <ge...@ga...> - 2016-03-04 04:15:29
That test case should not be part of the core XBRL API module. It should instead be a part of the XDT module testing. I will try to refactor it when I get a chance but in the meantime it would be safe to move the test case to the XDT module yourself or just delete it entirely.

Regards

Geoff S
From: Justin L. <jus...@gm...> - 2016-03-04 03:36:48
I'm sorry, I don't think I was clear in my previous email and subject. I've downloaded the XBRL API source and am trying to build the xbrlapi module library when I get the compile error.

DOMLoadingTestCase resides in org.xbrlapi. It imports LoaderImpl from org.xbrlapi.xdt, and this import is failing. How can I build the xbrlapi-api library (which depends on the xbrlapi-xdt library) when xbrlapi-xdt depends on the xbrlapi-api library? It seems like a chicken-and-egg thing. Are you saying I need an old xbrlapi-api.jar in order to build a new xbrlapi-api.jar?
From: Geoff S. <ge...@ga...> - 2016-03-04 03:18:00
Hi Justin,

When running DOMLoadingTestCase, you need to have the XDT module jar file as well as the core XBRLAPI module jar file on the classpath. You should just be able to download both of them from SourceForge.

Geoff S
From: Justin L. <jus...@gm...> - 2016-03-04 02:45:20
Hello. I've downloaded the source, installed all the dependencies and updated the test.configuration.properties file.

I'm brand new to Maven (which may be my problem), so forgive my ignorance. The error is occurring in this file:

C:\Users\Justin\Documents\NetBeansProjects\xbrlapi-source\xbrlapi-org_xbrlapi\org.xbrlapi\module-api\src\main\java\org\xbrlapi\DOMLoadingTestCase.java

It's complaining about line 12, which is an import for org.xbrlapi.xdt.LoaderImpl, which is the XBRLAPI XDT implementation. I tried making the API dependent on the XDT implementation; however, the XDT implementation is already dependent on the API, which created a cyclic dependency, and Maven yelled at me. Not sure what I'm doing wrong; any help would be appreciated.

I've pasted the error message below.

Thanks,

Justin

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project xbrlapi-api: Compilation failure: Compilation failure:
[ERROR] /C:/Users/Justin/Documents/xbrlapi-org_xbrlapi-d772f1fc4fa4e6663e01a2e700066f5c8043bf96/org.xbrlapi/module-api/src/main/java/org/xbrlapi/DOMLoadingTestCase.java:[12,23] package org.xbrlapi.xdt does not exist
[ERROR] /C:/Users/Justin/Documents/xbrlapi-org_xbrlapi-d772f1fc4fa4e6663e01a2e700066f5c8043bf96/org.xbrlapi/module-api/src/main/java/org/xbrlapi/DOMLoadingTestCase.java:[51,30] cannot find symbol
[ERROR]   symbol:   class LoaderImpl
[ERROR]   location: class org.xbrlapi.DOMLoadingTestCase
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :xbrlapi-api
From: Matthew D. <ro...@gm...> - 2013-05-23 17:40:29
Hi Geoff,

I would like to be able to retrieve LinkRoles for the Presentation network in their original order; for example, see the top of Nike's Presentation Linkbase (http://www.sec.gov/Archives/edgar/data/320187/000119312510161874/nke-20100531_pre.xml).

I have drilled down to the source code and found the XML query for retrieving LinkRoles by ArcRole, but I am not sure if or how this order would be stored in the Berkeley XML database, and thus I don't know how to query it. I tried looking around to see how LinkRoles are persisted, but I got pretty lost. Would you please let me know if this is possible and give me an idea of how I would go about doing this?

Thanks,
Matt
From: Matthew D. <ro...@gm...> - 2013-02-27 16:13:32
Hi Geoff,

I would like to correct my prior e-mail slightly. The issue does not occur using the standard loader (at least as near as I can tell); I forgot to actually reload the instance between trials. Using the standard loader, the extra Dimensions (i.e. SegmentDomain) do not appear at all, since the standard loader is not aware of them.

Matt
From: Matthew D. <ro...@gm...> - 2013-02-22 14:45:27
Ha! Thanks Geoff, not sure how I missed that one.

Matt
From: Geoff S. <ge...@ga...> - 2013-02-20 23:02:43
Matt,

Use the getResolvedDenominatorMeasures() and getResolvedNumeratorMeasures() methods of the Unit interface (javadoc: http://www.xbrlapi.org/javadoc/org/xbrlapi/Unit.html).

Regards

Geoff Shuetrim
From: Matthew D. <ro...@gm...> - 2013-02-20 21:36:10
Hi Geoff,

I am trying to retrieve Measures from Units using getNumeratorMeasures(). This function returns a NodeList, and I cannot figure out how to access the Measure objects from these Nodes. Simple casting does not work (as I suppose I should have guessed).

How can I do this? I want the Measure objects so that I can retrieve their attributes.

Thanks,
Matt DeAngelis
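Matt's casting problem is a general DOM one: the items in an org.w3c.dom.NodeList are DOM Nodes, so the only safe cast is to a DOM type such as Element, not to an application-level interface like Measure. A self-contained, JDK-only sketch of that pattern (the unit/measure XML here is illustrative, not the XBRLAPI's internal representation):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class NodeListDemo {
    // Parses a tiny unit element and reads the first measure value
    // out of the NodeList returned by getElementsByTagName.
    public static String firstMeasure(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList measures = doc.getElementsByTagName("measure");
        // NodeList items are DOM Nodes; cast to org.w3c.dom.Element
        // (not to a higher-level application interface) to read content.
        Element first = (Element) measures.item(0);
        return first.getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<unit><measure>iso4217:USD</measure></unit>";
        System.out.println(firstMeasure(xml)); // prints iso4217:USD
    }
}
```

The XBRLAPI-specific answer (per Geoff's reply) is to avoid the raw NodeList entirely and use the Unit interface's resolved-measure methods.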
From: Matthew D. <ro...@gm...> - 2012-08-15 20:11:55
Hi Geoff,

I have been looking into Fact Sets and Aspect Values as a way of organizing XBRL Facts along Dimensions. However, I am having some trouble accessing the custom dimensions that are provided in a filing's Contexts. For example, Agilent's 2011 10-K (http://www.sec.gov/Archives/edgar/data/1090872/000104746911010124/a-20111031.xml) has many dimensions, such as "us-gaap:StatementEquityComponentsAxis". However, when I build an Aspect Model and populate a Fact Set, there are no Aspect Values for this Dimension. The only Aspect Values I can find are the defaults (Location, Period, Concept, etc.).

I drilled down to the source code for building the Aspect Model, and it appears that the code attempts to find additional Dimensions by calling "getStore().<ExplicitDimension>getXMLResources(ExplicitDimensionImpl.class)". If I run this myself in my code, sure enough, I get an empty set. How can I access these extra Dimensions?

I am able to get the Member values directly from each Context by calling getSegment() on the Entity. However, that is rather clunky, and I haven't yet figured out how to get the corresponding Axis, so it actually isn't too helpful. Ideally, I would be able to see all Dimensions and query only the Facts for a particular Dimension; from your Run example, I feel like this should be quite doable.

Can you give me a hint about how I should be going about this? I've been banging away at it for a while.

Regards,
Matt DeAngelis
From: Geoff S. <ge...@ga...> - 2011-12-07 23:35:26
Matt,

You need to get familiar with the query methods in the Store interface. The best way to get a feel for how to use these is to look at some of the query-based methods in the BaseStoreImpl class and in the InstanceImpl class. That should enable you to do all of the things you are after here.

Note the one big tweak on XQueries: to select all of the root elements of XML resources in the data container that is your data store, you need to use the string #roots#. That gets substituted in a datastore-implementation-specific way by each data store implementation so that the XQuery is valid XQuery syntax. Thus, to get the facts in an XBRL instance, the query would be:

String query = "for $fact in #roots#[@fact and @uri='http://my.com/example/instance.xml'] return $fact";

This exploits the design feature of the XBRLAPI that each fragment of an XBRL document is wrapped in metadata XML. For each fact, that metadata has a @fact attribute and it contains the URI of the containing document. I really should document all of this XML stuff at the fragment metadata level but time is hard to find.

You run that query with:

store.<Fact>queryForXMLResources(query);

You can also return strings or a count of the number of query results using other methods in the Store interface.

Applying this to your specific requirements:

1. Getting all namespaces of facts in an XBRL instance:

String query = "for $fact in #roots#[@fact and @uri='http://my.com/example/instance.xml'] return namespace-uri($fact)";
Set<String> factNamespaces = store.queryForStrings(query);

2. To retrieve facts in a specific namespace, you can use:

String query = "for $fact in #roots#[@fact and @uri='http://my.com/example/instance.xml' and namespace-uri(xbrlapi:data/*)='http://eg.com/example/namespace'] return $fact";
List<Fact> facts = store.<Fact>queryForXMLResources(query);

I am not totally convinced of the performance of this last suggestion, but if it is not great, try getting all of the facts and then filtering them out of a fact set based on a customised concept-namespace aspect (where the aspect is like a standard concept aspect except that values vary only with namespace and not local name).

None of the above has been tested, but let me know if you have troubles. Good luck, and send me any working code for these queries. I think they make good sense as additions to the main API.

Regards

Geoff Shuetrim
From: Matthew D. <ro...@gm...> - 2011-12-06 21:25:10
Hi Geoff,

Is there a way to retrieve a list of namespaces for a particular SEC filing? I would like to retrieve Facts specifically for the "dei" namespace in a pile of files, and the namespaces vary based on the time period because there is a US GAAP taxonomy change in 2011. I am thinking that I can get all of the namespaces and regex through them to get the "dei" one.

Regards,
Matt DeAngelis
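The regex step Matt describes can be sketched with the JDK alone. The dei namespace URI shape and the pattern below are illustrative assumptions (SEC taxonomy namespaces are dated, so the pattern matches any .../dei/YYYY-MM-DD URI rather than one hard-coded year):

```java
import java.util.Set;
import java.util.TreeSet;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class DeiNamespaceFilter {
    // Assumed shape of SEC "dei" namespaces, e.g.
    // http://xbrl.sec.gov/dei/2011-01-31 (the date varies by release).
    private static final Pattern DEI =
            Pattern.compile(".*/dei/\\d{4}-\\d{2}-\\d{2}$");

    // Keeps only the namespaces that look like a dated dei namespace.
    public static Set<String> filterDei(Set<String> namespaces) {
        return namespaces.stream()
                .filter(ns -> DEI.matcher(ns).matches())
                .collect(Collectors.toCollection(TreeSet::new));
    }

    public static void main(String[] args) {
        Set<String> all = Set.of(
                "http://xbrl.sec.gov/dei/2011-01-31",
                "http://fasb.org/us-gaap/2011-01-31",
                "http://www.xbrl.org/2003/instance");
        System.out.println(filterDei(all)); // prints [http://xbrl.sec.gov/dei/2011-01-31]
    }
}
```

The input set would come from a namespace query against the store, such as the queryForStrings approach Geoff outlines in his reply.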
From: Matthew D. <ro...@gm...> - 2011-07-30 14:01:35
Hi Geoff,

I am experiencing a runtime error that I do not understand, related to two instances. The instances are:

http://www.sec.gov/Archives/edgar/data/1043121/000119312510040838/cik000104312-20091231.xml
http://www.sec.gov/Archives/edgar/data/1037540/000119312510040826/bxp-20091231.xml

The error occurs when I run the function:

List<Concept> concepts = instance.getAllConcepts();

on either instance. The error is:

XBRLAPI Exception: Schemas with URLs: http://www.sec.gov/Archives/edgar/data/1037540/000119312510040826/bxp-20091231.xsd: http://www.sec.gov/Archives/edgar/data/1043121/000119312510040838/cik000104312-20091231.xsd, have the target namespace http://www.bostonproperties.com/20091231

After perusing the schema files in question, I agree that they do, in fact, have that target namespace, but I'm not sure why that is creating a problem.

This is not an urgent need; I am going to remove these two instance documents from my sample and continue. However, I would be interested to know what is going on here, and whether there is a bug that needs to be addressed.

Regards,
Matt DeAngelis
From: Geoff S. <ge...@ga...> - 2011-07-14 22:10:59
|
Matthew, Something else to consider: the rendering example does a lot more than just aspect analysis. The biggest impact on your processing is its work exploring presentation relationship networks, which get quite massive in a huge datastore where everyone feels like tailoring the one network to their own purposes. What I suggest is that you cut that work out and use only the aspect analysis component of the code. You won't get the pretty printing of the data but you will get a nice factSet that can be rapidly extracted to a relational database, as I think you were working towards a while ago. Regards Geoff Shuetrim On 15 July 2011 07:36, Geoff Shuetrim <ge...@ga...> wrote: > Matthew, > > What you have found is indeed a feature of the XBRLAPI. The API is > designed to work with large stores of data, seeing through the boundaries of > XBRL instances. For data like the SEC filings, where many filings make a > range of changes to the metadata in taxonomies, in ways that are often > contradictory and sometimes inconsistent, this means that the traditional > approach to working with linkbase networks of relationships can result in > useless network analysis. It also can mean that things take a long time. > > I spent some time back when the SEC was looking for a visualisation tool, > thinking about how to deal with specific instances only and that led to some > new features in the API. > > Specifically, you can specify that query results in a store have to come > from a specified set of documents (where those documents are those in a DTS > of interest). That is done with the data store setFilteringURIs method. > > What URIs should you use? Find that out by using the data store's > getMinimumDocumentSet methods. > > That works but not with the kind of performance that I would like > (basically because it makes XQuery where clauses too lengthy). I would be > interested in your mileage on that but I am guessing it will be inadequate. 
> > You could write the documents in the target set of URIs to a separate data > store and work from there. I have never tried that but it may perform OK. > > I hope that at least clarifies what your situation is. Any progress you > can make on this will definitely be useful to others, if not myself, if I > can incorporate the features into the XBRLAPI. Let me know if I can help in > any way. > > Regards > > Geoff Shuetrim > > > > > On 15 July 2011 01:49, Matthew DeAngelis <ro...@gm...> wrote: > >> Hi Geoff, >> >> I think that the problems I am having with my large datastore are caused >> by aspect models and linkroles being based on stores, not instances. As a >> result, when I invoke these functions, the program loads every relationship >> in the data store, and not just those for a particular instance. >> Understandably, this crushes the system. >> >> Is there a way to build aspect models and use linkroles from a particular >> instance only? I could not find a way in the documentation, but I could >> easily have missed it. If not, is there an easy way to create a kind of >> "virtual store", which pulls documents and persisted relationships for a >> particular instance out of the main store, but can be treated as a store >> object? If not, I will look into how to do this, and would appreciate any >> guidance that you might have. >> >> >> Regards, >> Matt >> >> >> On Mon, Jul 11, 2011 at 5:29 PM, Matthew DeAngelis <ro...@gm...>wrote: >> >>> Hi Geoff, >>> >>> Using the new material with a fresh, single-item BDBXML database works as >>> expected. Using the new material with my existing database (which contains >>> over 500 instance and related documents) continues to have (some) of the >>> same problems. As such, I'm going to guess that the slowdown and tendency >>> for memory overruns is due to database performance. I will try rebuilding >>> my database again to see if there is anything wrong with it. 
>>> >>> If it turns out that the size is the problem, I will have to look into >>> tuning BDBXML databases. Have you noticed scaling issues with these >>> databases in the past? From your experience, do you think that an eXist >>> database would fare better with a large volume of data? >>> >>> Thanks for helping me troubleshoot. Now that I can at least work with a >>> single document quickly, I am starting to make progress. >>> >>> >>> Regards, >>> Matt >>> >>> On Sun, Jul 10, 2011 at 12:41 AM, Geoff Shuetrim <ge...@ga...>wrote: >>> >>>> Matthew, >>>> >>>> I have tried running the Rendering example using a fresh BDBXML data >>>> store. The data took a few minutes to load (less than 10) and then then >>>> rendering process was done in about 1-2 minutes. I used 2 GB of memory and >>>> did not come up against any constraints. >>>> >>>> Note that the freemarker template does need a couple of small changes to >>>> get it to work with the updated freemarker library and to reflect the change >>>> in the name of the serializeToString method of the Store interface. To get >>>> the fixed version of the Freemarker template, you can update from SVN or >>>> just download it directly from *http://tinyurl.com/3aubjlj >>>> >>>> *The rendering result is in a file in SVN "result.html" that is also >>>> accessible from the SVN browse facility. >>>> >>>> Without more information I am not going to be able to troubleshoot your >>>> problem. Give it a go with the revised freemarker template in SVN and let >>>> me know if the problems persist. If they do, you might need to do some of >>>> your own timing evaluations to point us in the right direction. >>>> >>>> Regards >>>> >>>> Geoff Shuetrim >>>> >>>> >>>> >>>> >>>> On 9 July 2011 05:50, Matthew DeAngelis <ro...@gm...> wrote: >>>> >>>>> Hi Geoff, >>>>> >>>>> Following your suggestion, I have started to work with aspects and >>>>> dimensions. 
I have been using your rendering example >>>>> (org.xbrlapi.data.bdbxml.examples.render, Run.java) as a template, since it >>>>> uses fact sets and aspect values in a number of ways. After some problems >>>>> with my own code, I attempted to run your Run.java example as is, on >>>>> http://www.sec.gov/Archives/edgar/data/93751/000119312510036481/stt-20091231.xml, >>>>> which is in the data store. It takes a long time to run (upwards of a half >>>>> hour), and fails with an Out of Memory Error after line 394 (once it starts >>>>> building childConcepts). My memory is set to 2G (out of 4G system memory), >>>>> which I should think would be sufficient for a single instance document. >>>>> >>>>> As mentioned above, I have also been running into problems with my own >>>>> code, attempting to load all of the Facts in a particular instance into a >>>>> FactSet. Specifically, if I retrieve all Facts from an instance, and >>>>> attempt to use addFacts() to add those Facts to the FactSet, I get an Out of >>>>> Memory Error. I have taken to loading the Facts one by one into the >>>>> FactSet, which bypasses the memory error, but is extremely slow: each Fact >>>>> takes many minutes (while writing, I have watched one take over 10 minutes). >>>>> This does not make sense to me: if all of the Facts can be retrieved from >>>>> the instance in a few seconds, I do not see why adding the Facts to the >>>>> FactSet should take so long. In an attempt to speed the process up, I ran >>>>> your utility to persist relationships >>>>> (org.xbrlapi.data.bdbxml.examples.utilities, >>>>> PersistAllRelationshipsInStore.java), but this did not help. I am beginning >>>>> to suspect that there is something wrong with my installation, but I am not >>>>> sure how to figure out what. >>>>> >>>>> Do you have any idea why these functions are eating so much memory and >>>>> CPU on my system? 
I don't think that I will be able to use dimensions and >>>>> aspects for my project (which will analyze over 500 annual reports) if I >>>>> cannot solve these issues. >>>>> >>>>> >>>>> Regards, >>>>> Matt >>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> All of the data generated in your IT infrastructure is seriously >>>>> valuable. >>>>> Why? It contains a definitive record of application performance, >>>>> security >>>>> threats, fraudulent activity, and more. Splunk takes this data and >>>>> makes >>>>> sense of it. IT sense. And common sense. >>>>> http://p.sf.net/sfu/splunk-d2d-c2 >>>>> _______________________________________________ >>>>> Xbrlapi-developer mailing list >>>>> Xbr...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/xbrlapi-developer >>>>> >>>>> >>>> >>> >> > |
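The suggestion above about extracting the factSet to a relational database can be sketched as a flattening step: one flat row per fact, with one column per aspect. The maps here stand in for xbrlapi's FactSet/AspectValue machinery, and the column names are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Flatten a fact's value plus its aspect-value map into a single row suitable
// for bulk insertion into a relational table.
public class FactFlattener {

    public static Map<String, String> toRow(String factValue, Map<String, String> aspects) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("value", factValue);
        row.putAll(aspects); // one column per aspect: concept, period, unit, ...
        return row;
    }

    public static void main(String[] args) {
        Map<String, String> aspects = new LinkedHashMap<>();
        aspects.put("concept", "us-gaap:Assets");
        aspects.put("period", "2009-12-31");
        System.out.println(toRow("1000000", aspects));
    }
}
```

Once facts are reduced to rows like this, the expensive presentation-network exploration is no longer on the critical path.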
From: Geoff S. <ge...@ga...> - 2011-07-14 21:37:04
|
Matthew, What you have found is indeed a feature of the XBRLAPI. The API is designed to work with large stores of data, seeing through the boundaries of XBRL instances. For data like the SEC filings, where many filings make a range of changes to the metadata in taxonomies, in ways that are often contradictory and sometimes inconsistent, this means that the traditional approach to working with linkbase networks of relationships can result in useless network analysis. It also can mean that things take a long time. I spent some time back when the SEC was looking for a visualisation tool, thinking about how to deal with specific instances only and that led to some new features in the API. Specifically, you can specify that query results in a store have to come from a specified set of documents (where those documents are those in a DTS of interest). That is done with the data store setFilteringURIs method. What URIs should you use? Find that out by using the data store's getMinimumDocumentSet methods. That works but not with the kind of performance that I would like (basically because it makes XQuery where clauses too lengthy). I would be interested in your mileage on that but I am guessing it will be inadequate. You could write the documents in the target set of URIs to a separate data store and work from there. I have never tried that but it may perform OK. I hope that at least clarifies what your situation is. Any progress you can make on this will definitely be useful to others, if not myself, if I can incorporate the features into the XBRLAPI. Let me know if I can help in any way. Regards Geoff Shuetrim On 15 July 2011 01:49, Matthew DeAngelis <ro...@gm...> wrote: > Hi Geoff, > > I think that the problems I am having with my large datastore are caused by > aspect models and linkroles being based on stores, not instances. 
As a > result, when I invoke these functions, the program loads every relationship > in the data store, and not just those for a particular instance. > Understandably, this crushes the system. > > Is there a way to build aspect models and use linkroles from a particular > instance only? I could not find a way in the documentation, but I could > easily have missed it. If not, is there an easy way to create a kind of > "virtual store", which pulls documents and persisted relationships for a > particular instance out of the main store, but can be treated as a store > object? If not, I will look into how to do this, and would appreciate any > guidance that you might have. > > > Regards, > Matt > > > On Mon, Jul 11, 2011 at 5:29 PM, Matthew DeAngelis <ro...@gm...>wrote: > >> Hi Geoff, >> >> Using the new material with a fresh, single-item BDBXML database works as >> expected. Using the new material with my existing database (which contains >> over 500 instance and related documents) continues to have (some) of the >> same problems. As such, I'm going to guess that the slowdown and tendency >> for memory overruns is due to database performance. I will try rebuilding >> my database again to see if there is anything wrong with it. >> >> If it turns out that the size is the problem, I will have to look into >> tuning BDBXML databases. Have you noticed scaling issues with these >> databases in the past? From your experience, do you think that an eXist >> database would fare better with a large volume of data? >> >> Thanks for helping me troubleshoot. Now that I can at least work with a >> single document quickly, I am starting to make progress. >> >> >> Regards, >> Matt >> >> On Sun, Jul 10, 2011 at 12:41 AM, Geoff Shuetrim <ge...@ga...>wrote: >> >>> Matthew, >>> >>> I have tried running the Rendering example using a fresh BDBXML data >>> store. The data took a few minutes to load (less than 10) and then then >>> rendering process was done in about 1-2 minutes. 
I used 2 GB of memory and >>> did not come up against any constraints. >>> >>> Note that the freemarker template does need a couple of small changes to >>> get it to work with the updated freemarker library and to reflect the change >>> in the name of the serializeToString method of the Store interface. To get >>> the fixed version of the Freemarker template, you can update from SVN or >>> just download it directly from *http://tinyurl.com/3aubjlj >>> >>> *The rendering result is in a file in SVN "result.html" that is also >>> accessible from the SVN browse facility. >>> >>> Without more information I am not going to be able to troubleshoot your >>> problem. Give it a go with the revised freemarker template in SVN and let >>> me know if the problems persist. If they do, you might need to do some of >>> your own timing evaluations to point us in the right direction. >>> >>> Regards >>> >>> Geoff Shuetrim >>> >>> >>> >>> >>> On 9 July 2011 05:50, Matthew DeAngelis <ro...@gm...> wrote: >>> >>>> Hi Geoff, >>>> >>>> Following your suggestion, I have started to work with aspects and >>>> dimensions. I have been using your rendering example >>>> (org.xbrlapi.data.bdbxml.examples.render, Run.java) as a template, since it >>>> uses fact sets and aspect values in a number of ways. After some problems >>>> with my own code, I attempted to run your Run.java example as is, on >>>> http://www.sec.gov/Archives/edgar/data/93751/000119312510036481/stt-20091231.xml, >>>> which is in the data store. It takes a long time to run (upwards of a half >>>> hour), and fails with an Out of Memory Error after line 394 (once it starts >>>> building childConcepts). My memory is set to 2G (out of 4G system memory), >>>> which I should think would be sufficient for a single instance document. >>>> >>>> As mentioned above, I have also been running into problems with my own >>>> code, attempting to load all of the Facts in a particular instance into a >>>> FactSet. 
Specifically, if I retrieve all Facts from an instance, and >>>> attempt to use addFacts() to add those Facts to the FactSet, I get an Out of >>>> Memory Error. I have taken to loading the Facts one by one into the >>>> FactSet, which bypasses the memory error, but is extremely slow: each Fact >>>> takes many minutes (while writing, I have watched one take over 10 minutes). >>>> This does not make sense to me: if all of the Facts can be retrieved from >>>> the instance in a few seconds, I do not see why adding the Facts to the >>>> FactSet should take so long. In an attempt to speed the process up, I ran >>>> your utility to persist relationships >>>> (org.xbrlapi.data.bdbxml.examples.utilities, >>>> PersistAllRelationshipsInStore.java), but this did not help. I am beginning >>>> to suspect that there is something wrong with my installation, but I am not >>>> sure how to figure out what. >>>> >>>> Do you have any idea why these functions are eating so much memory and >>>> CPU on my system? I don't think that I will be able to use dimensions and >>>> aspects for my project (which will analyze over 500 annual reports) if I >>>> cannot solve these issues. >>>> >>>> >>>> Regards, >>>> Matt >>>> >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> All of the data generated in your IT infrastructure is seriously >>>> valuable. >>>> Why? It contains a definitive record of application performance, >>>> security >>>> threats, fraudulent activity, and more. Splunk takes this data and makes >>>> sense of it. IT sense. And common sense. >>>> http://p.sf.net/sfu/splunk-d2d-c2 >>>> _______________________________________________ >>>> Xbrlapi-developer mailing list >>>> Xbr...@li... >>>> https://lists.sourceforge.net/lists/listinfo/xbrlapi-developer >>>> >>>> >>> >> > |
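The document-set filtering described above (restricting work to the documents in one instance's DTS) can be illustrated with plain Java collections. The Fact class and the restrict method are stand-ins, not xbrlapi types; in the real API the equivalent mechanism is the store's setFilteringURIs method fed by its getMinimumDocumentSet methods.

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Keep only facts whose source document is inside the chosen DTS document set.
public class DocumentSetFilter {

    public static final class Fact {
        public final URI documentUri;
        public final String concept;
        public Fact(URI documentUri, String concept) {
            this.documentUri = documentUri;
            this.concept = concept;
        }
    }

    public static List<Fact> restrict(List<Fact> allFacts, Set<URI> dts) {
        List<Fact> kept = new ArrayList<>();
        for (Fact f : allFacts) {
            if (dts.contains(f.documentUri)) kept.add(f);
        }
        return kept;
    }

    public static void main(String[] args) {
        URI a = URI.create("http://example.com/instance.xml");
        List<Fact> facts = new ArrayList<>();
        facts.add(new Fact(a, "us-gaap:Assets"));
        facts.add(new Fact(URI.create("http://example.com/other.xml"), "us-gaap:Liabilities"));
        // Only the fact from the chosen document survives the filter.
        System.out.println(restrict(facts, Set.of(a)).size());
    }
}
```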
From: Matthew D. <ro...@gm...> - 2011-07-14 15:49:56
|
Hi Geoff, I think that the problems I am having with my large datastore are caused by aspect models and linkroles being based on stores, not instances. As a result, when I invoke these functions, the program loads every relationship in the data store, and not just those for a particular instance. Understandably, this crushes the system. Is there a way to build aspect models and use linkroles from a particular instance only? I could not find a way in the documentation, but I could easily have missed it. If not, is there an easy way to create a kind of "virtual store", which pulls documents and persisted relationships for a particular instance out of the main store, but can be treated as a store object? If not, I will look into how to do this, and would appreciate any guidance that you might have. Regards, Matt On Mon, Jul 11, 2011 at 5:29 PM, Matthew DeAngelis <ro...@gm...>wrote: > Hi Geoff, > > Using the new material with a fresh, single-item BDBXML database works as > expected. Using the new material with my existing database (which contains > over 500 instance and related documents) continues to have (some) of the > same problems. As such, I'm going to guess that the slowdown and tendency > for memory overruns is due to database performance. I will try rebuilding > my database again to see if there is anything wrong with it. > > If it turns out that the size is the problem, I will have to look into > tuning BDBXML databases. Have you noticed scaling issues with these > databases in the past? From your experience, do you think that an eXist > database would fare better with a large volume of data? > > Thanks for helping me troubleshoot. Now that I can at least work with a > single document quickly, I am starting to make progress. > > > Regards, > Matt > > On Sun, Jul 10, 2011 at 12:41 AM, Geoff Shuetrim <ge...@ga...> wrote: > >> Matthew, >> >> I have tried running the Rendering example using a fresh BDBXML data >> store. 
The data took a few minutes to load (less than 10) and then then >> rendering process was done in about 1-2 minutes. I used 2 GB of memory and >> did not come up against any constraints. >> >> Note that the freemarker template does need a couple of small changes to >> get it to work with the updated freemarker library and to reflect the change >> in the name of the serializeToString method of the Store interface. To get >> the fixed version of the Freemarker template, you can update from SVN or >> just download it directly from *http://tinyurl.com/3aubjlj >> >> *The rendering result is in a file in SVN "result.html" that is also >> accessible from the SVN browse facility. >> >> Without more information I am not going to be able to troubleshoot your >> problem. Give it a go with the revised freemarker template in SVN and let >> me know if the problems persist. If they do, you might need to do some of >> your own timing evaluations to point us in the right direction. >> >> Regards >> >> Geoff Shuetrim >> >> >> >> >> On 9 July 2011 05:50, Matthew DeAngelis <ro...@gm...> wrote: >> >>> Hi Geoff, >>> >>> Following your suggestion, I have started to work with aspects and >>> dimensions. I have been using your rendering example >>> (org.xbrlapi.data.bdbxml.examples.render, Run.java) as a template, since it >>> uses fact sets and aspect values in a number of ways. After some problems >>> with my own code, I attempted to run your Run.java example as is, on >>> http://www.sec.gov/Archives/edgar/data/93751/000119312510036481/stt-20091231.xml, >>> which is in the data store. It takes a long time to run (upwards of a half >>> hour), and fails with an Out of Memory Error after line 394 (once it starts >>> building childConcepts). My memory is set to 2G (out of 4G system memory), >>> which I should think would be sufficient for a single instance document. 
>>> >>> As mentioned above, I have also been running into problems with my own >>> code, attempting to load all of the Facts in a particular instance into a >>> FactSet. Specifically, if I retrieve all Facts from an instance, and >>> attempt to use addFacts() to add those Facts to the FactSet, I get an Out of >>> Memory Error. I have taken to loading the Facts one by one into the >>> FactSet, which bypasses the memory error, but is extremely slow: each Fact >>> takes many minutes (while writing, I have watched one take over 10 minutes). >>> This does not make sense to me: if all of the Facts can be retrieved from >>> the instance in a few seconds, I do not see why adding the Facts to the >>> FactSet should take so long. In an attempt to speed the process up, I ran >>> your utility to persist relationships >>> (org.xbrlapi.data.bdbxml.examples.utilities, >>> PersistAllRelationshipsInStore.java), but this did not help. I am beginning >>> to suspect that there is something wrong with my installation, but I am not >>> sure how to figure out what. >>> >>> Do you have any idea why these functions are eating so much memory and >>> CPU on my system? I don't think that I will be able to use dimensions and >>> aspects for my project (which will analyze over 500 annual reports) if I >>> cannot solve these issues. >>> >>> >>> Regards, >>> Matt >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> All of the data generated in your IT infrastructure is seriously >>> valuable. >>> Why? It contains a definitive record of application performance, security >>> threats, fraudulent activity, and more. Splunk takes this data and makes >>> sense of it. IT sense. And common sense. >>> http://p.sf.net/sfu/splunk-d2d-c2 >>> _______________________________________________ >>> Xbrlapi-developer mailing list >>> Xbr...@li... >>> https://lists.sourceforge.net/lists/listinfo/xbrlapi-developer >>> >>> >> > |
From: Shwetha S. <shw...@om...> - 2011-07-14 11:52:45
|
Hi Everyone, I am new to XBRLAPI and have no knowledge of it. I have configured xbrl package as specified in www.xbrlapi.org. After configuring i tried to compile ..\module-examples\src\main\java\org\xbrlapi\data\bdbxml\examples\load\Load.java While i try to do this it is throwing me an error as below cannot access org.apache.xerces.xni.parser.XMLEntityResolver class file for org.apache.xerces.xni.parser.XMLEntityResolver not found Loader myLoader = new LoaderImpl(store,xlinkProcessor, entityResolver); I fear if my configurations have gone wrong. Please help me on this. Thank You. Shwetha |
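The error above usually means the Xerces jar is missing from the compile classpath: LoaderImpl's constructor references the Xerces XMLEntityResolver interface, so the compiler needs that jar even to compile code that calls it. A minimal sketch, with hypothetical jar names and a hypothetical lib directory (adjust both to your actual install):

```shell
# Hypothetical layout: jars collected in ./lib. Names will vary by version.
LIB=lib
CP="$LIB/xbrlapi-api.jar:$LIB/xercesImpl.jar:$LIB/xml-apis.jar:."
echo "$CP"
# Then compile the example against that classpath, e.g.:
# javac -cp "$CP" src/main/java/org/xbrlapi/data/bdbxml/examples/load/Load.java
```

If you are building through Maven instead, the same fix is making sure the Xerces dependency resolves, rather than editing the classpath by hand.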
From: Geoff S. <ge...@ga...> - 2011-07-10 04:41:17
|
Matthew, I have tried running the Rendering example using a fresh BDBXML data store. The data took a few minutes to load (less than 10) and then the rendering process was done in about 1-2 minutes. I used 2 GB of memory and did not come up against any constraints. Note that the freemarker template does need a couple of small changes to get it to work with the updated freemarker library and to reflect the change in the name of the serializeToString method of the Store interface. To get the fixed version of the Freemarker template, you can update from SVN or just download it directly from http://tinyurl.com/3aubjlj The rendering result is in a file in SVN, "result.html", that is also accessible from the SVN browse facility. Without more information I am not going to be able to troubleshoot your problem. Give it a go with the revised freemarker template in SVN and let me know if the problems persist. If they do, you might need to do some of your own timing evaluations to point us in the right direction. Regards Geoff Shuetrim On 9 July 2011 05:50, Matthew DeAngelis <ro...@gm...> wrote: > Hi Geoff, > > Following your suggestion, I have started to work with aspects and > dimensions. I have been using your rendering example > (org.xbrlapi.data.bdbxml.examples.render, Run.java) as a template, since it > uses fact sets and aspect values in a number of ways. After some problems > with my own code, I attempted to run your Run.java example as is, on > http://www.sec.gov/Archives/edgar/data/93751/000119312510036481/stt-20091231.xml, > which is in the data store. It takes a long time to run (upwards of a half > hour), and fails with an Out of Memory Error after line 394 (once it starts > building childConcepts). My memory is set to 2G (out of 4G system memory), > which I should think would be sufficient for a single instance document. 
> > As mentioned above, I have also been running into problems with my own > code, attempting to load all of the Facts in a particular instance into a > FactSet. Specifically, if I retrieve all Facts from an instance, and > attempt to use addFacts() to add those Facts to the FactSet, I get an Out of > Memory Error. I have taken to loading the Facts one by one into the > FactSet, which bypasses the memory error, but is extremely slow: each Fact > takes many minutes (while writing, I have watched one take over 10 minutes). > This does not make sense to me: if all of the Facts can be retrieved from > the instance in a few seconds, I do not see why adding the Facts to the > FactSet should take so long. In an attempt to speed the process up, I ran > your utility to persist relationships > (org.xbrlapi.data.bdbxml.examples.utilities, > PersistAllRelationshipsInStore.java), but this did not help. I am beginning > to suspect that there is something wrong with my installation, but I am not > sure how to figure out what. > > Do you have any idea why these functions are eating so much memory and CPU > on my system? I don't think that I will be able to use dimensions and > aspects for my project (which will analyze over 500 annual reports) if I > cannot solve these issues. > > > Regards, > Matt > > > > > ------------------------------------------------------------------------------ > All of the data generated in your IT infrastructure is seriously valuable. > Why? It contains a definitive record of application performance, security > threats, fraudulent activity, and more. Splunk takes this data and makes > sense of it. IT sense. And common sense. > http://p.sf.net/sfu/splunk-d2d-c2 > _______________________________________________ > Xbrlapi-developer mailing list > Xbr...@li... > https://lists.sourceforge.net/lists/listinfo/xbrlapi-developer > > |
From: Matthew D. <ro...@gm...> - 2011-07-08 19:50:35
|
Hi Geoff, Following your suggestion, I have started to work with aspects and dimensions. I have been using your rendering example (org.xbrlapi.data.bdbxml.examples.render, Run.java) as a template, since it uses fact sets and aspect values in a number of ways. After some problems with my own code, I attempted to run your Run.java example as is, on http://www.sec.gov/Archives/edgar/data/93751/000119312510036481/stt-20091231.xml, which is in the data store. It takes a long time to run (upwards of a half hour), and fails with an Out of Memory Error after line 394 (once it starts building childConcepts). My memory is set to 2G (out of 4G system memory), which I should think would be sufficient for a single instance document. As mentioned above, I have also been running into problems with my own code, attempting to load all of the Facts in a particular instance into a FactSet. Specifically, if I retrieve all Facts from an instance, and attempt to use addFacts() to add those Facts to the FactSet, I get an Out of Memory Error. I have taken to loading the Facts one by one into the FactSet, which bypasses the memory error, but is extremely slow: each Fact takes many minutes (while writing, I have watched one take over 10 minutes). This does not make sense to me: if all of the Facts can be retrieved from the instance in a few seconds, I do not see why adding the Facts to the FactSet should take so long. In an attempt to speed the process up, I ran your utility to persist relationships (org.xbrlapi.data.bdbxml.examples.utilities, PersistAllRelationshipsInStore.java), but this did not help. I am beginning to suspect that there is something wrong with my installation, but I am not sure how to figure out what. Do you have any idea why these functions are eating so much memory and CPU on my system? I don't think that I will be able to use dimensions and aspects for my project (which will analyze over 500 annual reports) if I cannot solve these issues. Regards, Matt |
From: Geoff S. <ge...@ga...> - 2011-06-18 01:19:15
|
You will need both to use SEC filings Sent from my mobile device. On 17/06/2011, at 23:16, Matthew DeAngelis <ro...@gm...> wrote: Hi Geoff, Thanks for the attention to this. I did not know enough about the schema location attribute to look closely for it, but I see it now in comparing the DAR instance to another instance. It is good to know that the schemaRef element is not recognizable to Xerces. I will verify that the information I want is being entered into the data store, but from your response below, it doesn't look like this is going to be a big deal for me. I will take a deeper dive into the aspect package. Right now, dimensions and aspects look a little intimidating, but they are too good to pass up. Regards, Matt On Thu, Jun 16, 2011 at 8:01 PM, Geoff Shuetrim <ge...@ga...> wrote: > Matthew > > Looking at the example XBRL instance ( > http://www.sec.gov/Archives/edgar/data/916540/000091654011000018/dar-20110615.xml) > the problem is straightforward - there is no use of the schema location > attribute at all. In the previous version of the XBRLAPI, this would have > thrown a validation error much earlier but now, I am caching the official > XBRL schemas in the grammar pool before doing any SAX parsing of XBRL > instances. This makes sure that the XBRL specification defined elements are > validated using XML schema, even if nothing else is. The instance is > clearly expecting the XBRL schemaRef element to be sufficient for the > processor to determine what schema to use for validation of the instance but > the schemaRef element and its semantics are not understood by the Xerces > parser. In a future version of the XBRLAPI, I may include a preprocessing > step that would scan documents for such ref elements to build up the grammar > pool in advance of XML Schema validation but that is not a high priority for > me at the moment. It adds to the processing time and is still not going to > lead to full XBRL validation. 
> > The reason to be careful with this is that, without XML schema validation, > things like XML Schema default values for elements and attributes are not > going to be added to the post validation infoset. Such defaults are used > pretty rarely in XBRL - because their usage is fraught with problems like > this one. If you want to be sure, try XQuerying the data store for usage of > default attributes and fixed attributes in the fragments that extend the > XMLSchemaContent class. That is about all I have to suggest at this stage. > > Regards > > Geoff Shuetrim > > > On 17 June 2011 08:03, Geoff Shuetrim <ge...@ga...> wrote: > >> I have recently made changes that make the XBRLAPI more demanding at the >> XML Schema validation stage, at least in terms of what it checks and >> reports. The changes do not alter what is actually being loaded into the >> data store but they do let you know what was and was not validated using >> Xerces XML Schema validation. >> >> The kind of error being found by you at the schema validation stage has >> been turning up for me in three different circumstances: >> >> 1. When the xsi:schemaLocation attribute is not providing enough >> information to find all of the schemas required to do schema validation; and >> >> 2. When more than one schema has the same target namespace - such as >> occurs in the XBRL 2.1 conformance test suite. >> >> 3. When there is an XML Schema validity issue in the file being parsed. >> >> I am not sure what the right step to take regarding 1 is (perhaps it is to >> do DTS discovery first, find all relevant schemas, and then to do XML schema >> validation but that seems to be putting the cart before the horse a bit.) >> but for 2, the problem used to arise for me because the XML Schema grammar >> pool caches the first schema it encounters for the namespace and then >> continues using it without augmenting that schema grammar with information >> from other schemas with the same target namespace. 
I thought I had fixed 2 >> by locking the grammar pool after adding just the main xbrl XML Schemas but >> perhaps that was not sufficient. I am guessing I can ignore 3. >> >> I will take a look at the example files you provided links to and see if >> we have a new scenario in which this issue arises. In the meantime, the >> files should be loading into the data store OK so long as they are XBRL >> valid. The XBRLAPI is designed to be as robust to things like this as >> possible, kind of like a web browser is to wierd HTML markup. >> >> Regards >> >> Geoff S >> >> On 17 June 2011 06:29, Matthew DeAngelis <ro...@gm...> wrote: >> >>> Hi all (and especially Geoff): >>> >>> I am running the LoadAllSECFilings example (on both the provided RSS feed >>> and http://www.sec.gov/Archives/edgar/usgaap.rss.xml). While the loader >>> threads are running, I regularly get errors of the form below: >>> >>> ERROR BaseContentHandlerImpl.java 128 [error] - :cvc-complex-type.2.4.a: >>> Invalid content was found starting with element >>> 'dar:DecreaseInLongTermPensionLiability'. One of '{" >>> http://www.xbrl.org/2003/instance":item, " >>> http://www.xbrl.org/2003/instance":tuple, " >>> http://www.xbrl.org/2003/instance":context, " >>> http://www.xbrl.org/2003/instance":unit, " >>> http://www.xbrl.org/2003/linkbase":footnoteLink}' is expected.: on line >>> number 479 >>> >>> From reading the documentation, I gather that this is due to the element >>> not being located in the schema information provided in the instance. >>> However, in the above case (instance document: >>> http://www.sec.gov/Archives/edgar/data/916540/000091654011000018/dar-20110615.xml, >>> schema document: >>> http://www.sec.gov/Archives/edgar/data/916540/000091654011000018/dar-20110615.xsd), >>> the element is defined, not in the standard schema, but in the .xsd file. >>> This definition does not appear to be malformed. 
>>> >>> Since the API is also pulling the .xsd file, and it seems to recognize >>> other non-standard elements, I am not sure why this error is occurring. >>> Some of these errors are on US GAAP elements as well. I am uncomfortable >>> with the idea that one, seemingly random, element, may be missing from every >>> third or fourth XBRL report, so I would like to correct this if possible >>> (before I start manipulating the data). >>> >>> What is causing this error, and is there anything I can do about it? If >>> it simply reflects a problem with the reports as written, then I will have >>> to live with it. >>> >>> >>> Regards, >>> Matt >>> >>> >>> ------------------------------------------------------------------------------ >>> EditLive Enterprise is the world's most technically advanced content >>> authoring tool. Experience the power of Track Changes, Inline Image >>> Editing and ensure content is compliant with Accessibility Checking. >>> http://p.sf.net/sfu/ephox-dev2dev >>> _______________________________________________ >>> Xbrlapi-developer mailing list >>> Xbr...@li... >>> https://lists.sourceforge.net/lists/listinfo/xbrlapi-developer >>> >>> >> > ------------------------------------------------------------------------------ EditLive Enterprise is the world's most technically advanced content authoring tool. Experience the power of Track Changes, Inline Image Editing and ensure content is compliant with Accessibility Checking. http://p.sf.net/sfu/ephox-dev2dev _______________________________________________ Xbrlapi-developer mailing list Xbr...@li... https://lists.sourceforge.net/lists/listinfo/xbrlapi-developer |
From: Matthew D. <ro...@gm...> - 2011-06-17 13:16:02
|
Hi Geoff,

Thanks for the attention to this. I did not know enough about the schema location attribute to look closely for it, but I see it now in comparing the DAR instance to another instance. It is good to know that the schemaRef element is not recognizable to Xerces.

I will verify that the information I want is being entered into the data store, but from your response below, it doesn't look like this is going to be a big deal for me. I will take a deeper dive into the aspect package. Right now, dimensions and aspects look a little intimidating, but they are too good to pass up.

Regards,
Matt
|
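Geoff's suggestion further down the thread, to check whether schema default and fixed values actually matter for a document set, can be approximated outside the data store by querying the schema files directly. A sketch using the JDK's XPath API (the class name is hypothetical, not part of the XBRLAPI):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class DefaultAttributeScan {

    /**
     * Returns the names of the xs:attribute declarations that carry a
     * default or fixed value in the given schema document.
     */
    public static List<String> defaultedAttributes(String schemaXml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new InputSource(new StringReader(schemaXml)));
        // local-name() keeps the query independent of the xs:/xsd: prefix choice.
        NodeList hits = (NodeList) XPathFactory.newInstance().newXPath().evaluate(
                "//*[local-name()='attribute'][@default or @fixed]",
                doc, XPathConstants.NODESET);
        List<String> names = new ArrayList<>();
        for (int i = 0; i < hits.getLength(); i++) {
            names.add(((Element) hits.item(i)).getAttribute("name"));
        }
        return names;
    }
}
```

If this scan finds no declarations with `default` or `fixed` on the schemas in your DTS, skipping full schema validation cannot have dropped any defaulted values from the infoset.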
From: Geoff S. <ge...@ga...> - 2011-06-17 00:26:50
|
Matthew,

Looking at the example XBRL instance (http://www.sec.gov/Archives/edgar/data/916540/000091654011000018/dar-20110615.xml), the problem is straightforward: there is no use of the schema location attribute at all. In the previous version of the XBRLAPI, this would have thrown a validation error much earlier, but now I am caching the official XBRL schemas in the grammar pool before doing any SAX parsing of XBRL instances. This makes sure that the XBRL specification-defined elements are validated using XML Schema, even if nothing else is. The instance is clearly expecting the XBRL schemaRef element to be sufficient for the processor to determine what schema to use for validation of the instance, but the schemaRef element and its semantics are not understood by the Xerces parser. In a future version of the XBRLAPI, I may include a preprocessing step that would scan documents for such ref elements to build up the grammar pool in advance of XML Schema validation, but that is not a high priority for me at the moment. It adds to the processing time and is still not going to lead to full XBRL validation.

The reason to be careful with this is that, without XML Schema validation, things like XML Schema default values for elements and attributes are not going to be added to the post-validation infoset. Such defaults are used pretty rarely in XBRL, because their usage is fraught with problems like this one. If you want to be sure, try XQuerying the data store for usage of default attributes and fixed attributes in the fragments that extend the XMLSchemaContent class. That is about all I have to suggest at this stage.

Regards

Geoff Shuetrim
|
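The preprocessing step Geoff describes, scanning an instance for schemaRef elements before schema validation, could start with something like the following sketch (the class name is hypothetical; the href values would then feed whatever grammar cache the processor uses):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class SchemaRefScan {

    private static final String LINK_NS = "http://www.xbrl.org/2003/linkbase";
    private static final String XLINK_NS = "http://www.w3.org/1999/xlink";

    /** Returns the xlink:href of every link:schemaRef in an XBRL instance. */
    public static List<String> schemaRefs(String instanceXml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new InputSource(new StringReader(instanceXml)));
        // schemaRef elements live in the XBRL linkbase namespace and point
        // at their schemas via xlink:href.
        NodeList refs = doc.getElementsByTagNameNS(LINK_NS, "schemaRef");
        List<String> hrefs = new ArrayList<>();
        for (int i = 0; i < refs.getLength(); i++) {
            hrefs.add(((Element) refs.item(i)).getAttributeNS(XLINK_NS, "href"));
        }
        return hrefs;
    }
}
```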
From: Matthew D. <ro...@gm...> - 2011-06-16 22:27:21
|
Hi Geoff,

Thanks for the quick response. It is good to know that the data should still be loaded into the data store; if that is the case (and I can check for that once the data is loaded and I figure out how to access the relevant information), this is not an especially big problem for me. However, I would be interested to know what your exploration turns up.

Many thanks for this API: it seems to be the best OSS XBRL implementation out there, and I am only just beginning to appreciate the functionality. This is my first time programming with Java, so it is an adventure, but I am enjoying walking through all of the classes and methods to see what I can find and how things can be analyzed and grouped.

Regards,
Matt
|
From: Geoff S. <ge...@ga...> - 2011-06-16 22:03:34
|
I have recently made changes that make the XBRLAPI more demanding at the XML Schema validation stage, at least in terms of what it checks and reports. The changes do not alter what is actually being loaded into the data store, but they do let you know what was and was not validated using Xerces XML Schema validation.

The kind of error being found by you at the schema validation stage has been turning up for me in three different circumstances:

1. When the xsi:schemaLocation attribute is not providing enough information to find all of the schemas required to do schema validation;

2. When more than one schema has the same target namespace, such as occurs in the XBRL 2.1 conformance test suite; and

3. When there is an XML Schema validity issue in the file being parsed.

I am not sure what the right step to take regarding 1 is (perhaps it is to do DTS discovery first, find all relevant schemas, and then to do XML Schema validation, but that seems to be putting the cart before the horse a bit), but for 2, the problem used to arise for me because the XML Schema grammar pool caches the first schema it encounters for the namespace and then continues using it without augmenting that schema grammar with information from other schemas with the same target namespace. I thought I had fixed 2 by locking the grammar pool after adding just the main XBRL XML Schemas, but perhaps that was not sufficient. I am guessing I can ignore 3.

I will take a look at the example files you provided links to and see if we have a new scenario in which this issue arises. In the meantime, the files should be loading into the data store OK so long as they are XBRL valid. The XBRLAPI is designed to be as robust to things like this as possible, kind of like a web browser is to weird HTML markup.

Regards

Geoff S

On 17 June 2011 06:29, Matthew DeAngelis <ro...@gm...> wrote:

> Hi all (and especially Geoff):
>
> I am running the LoadAllSECFilings example (on both the provided RSS feed and http://www.sec.gov/Archives/edgar/usgaap.rss.xml). While the loader threads are running, I regularly get errors of the form below:
>
> ERROR BaseContentHandlerImpl.java 128 [error] - :cvc-complex-type.2.4.a: Invalid content was found starting with element 'dar:DecreaseInLongTermPensionLiability'. One of '{"http://www.xbrl.org/2003/instance":item, "http://www.xbrl.org/2003/instance":tuple, "http://www.xbrl.org/2003/instance":context, "http://www.xbrl.org/2003/instance":unit, "http://www.xbrl.org/2003/linkbase":footnoteLink}' is expected.: on line number 479
>
> From reading the documentation, I gather that this is due to the element not being located in the schema information provided in the instance. However, in the above case (instance document: http://www.sec.gov/Archives/edgar/data/916540/000091654011000018/dar-20110615.xml, schema document: http://www.sec.gov/Archives/edgar/data/916540/000091654011000018/dar-20110615.xsd), the element is defined, not in the standard schema, but in the .xsd file. This definition does not appear to be malformed.
>
> Since the API is also pulling the .xsd file, and it seems to recognize other non-standard elements, I am not sure why this error is occurring. Some of these errors are on US GAAP elements as well. I am uncomfortable with the idea that one, seemingly random, element may be missing from every third or fourth XBRL report, so I would like to correct this if possible (before I start manipulating the data).
>
> What is causing this error, and is there anything I can do about it? If it simply reflects a problem with the reports as written, then I will have to live with it.
>
> Regards,
> Matt
>
> ------------------------------------------------------------------------------
> EditLive Enterprise is the world's most technically advanced content
> authoring tool. Experience the power of Track Changes, Inline Image
> Editing and ensure content is compliant with Accessibility Checking.
> http://p.sf.net/sfu/ephox-dev2dev
> _______________________________________________
> Xbrlapi-developer mailing list
> Xbr...@li...
> https://lists.sourceforge.net/lists/listinfo/xbrlapi-developer
|
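The cvc-complex-type.2.4.a diagnostic discussed in this thread is reproducible with the JDK's built-in validator. The sketch below (class name hypothetical, not part of the XBRLAPI) hands the grammar to the validator directly instead of relying on an xsi:schemaLocation hint in the instance, which is the same idea as preloading the Xerces grammar pool:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import org.xml.sax.SAXException;

public class ExplicitSchemaValidation {

    /**
     * Validates instanceXml against schemaXml. Returns null when the
     * instance is schema-valid, otherwise the first validation error
     * message (the default error handler throws on the first error).
     */
    public static String validate(String schemaXml, String instanceXml) throws Exception {
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // The grammar is supplied up front, so the instance document needs
        // no xsi:schemaLocation attribute at all.
        Schema schema = factory.newSchema(new StreamSource(new StringReader(schemaXml)));
        try {
            schema.newValidator().validate(new StreamSource(new StringReader(instanceXml)));
            return null;
        } catch (SAXException e) {
            return e.getMessage();
        }
    }
}
```

An element that the supplied grammar does not declare produces the same kind of "Invalid content was found starting with element ..." message shown in the log excerpt above.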