FIXME cannot write large index yet, see Database javadoc for info on enabling
large index support
If I delete the Access db and import the same data into a new Access db file, it works fine, and I am not adding a large number of records that should trigger the large-index problem.
Any ideas of where I can look?
Regards
Access database indexes have 2 "phases": a small "single page" phase and a large "multi page" phase. Originally, jackcess only supported the former. It now supports the latter, but up until the most recent release, support for "multi page" indexes was disabled by default because it was still experimental. It sounds like you need to enable large index support. If you use the latest jackcess release, large index support is enabled by default, so you don't need to do anything special (the "multi page" support is no longer experimental). Note that indexes can become "large" due to the type of data written, not just the number of records written.
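For the older jackcess 1.x releases where large index support was still off by default, the Database javadoc described how to enable it before opening the file. A minimal sketch, assuming the system property name used in that era (verify the exact name against the Database javadoc of the version you actually use):

```java
// Sketch: enabling "large index" support in older jackcess 1.x releases where
// it was off by default. The property name below is an assumption based on
// that era's javadoc; check the Database javadoc of the version you use.
public class EnableLargeIndexes {
    // Assumed property name -- verify against your jackcess version.
    static final String USE_BIG_INDEX_PROPERTY =
            "com.healthmarketscience.jackcess.bigIndex";

    public static void main(String[] args) {
        // Must be set before Database.open(...) is called.
        System.setProperty(USE_BIG_INDEX_PROPERTY, "true");
        System.out.println(System.getProperty(USE_BIG_INDEX_PROPERTY));
        // Database db = Database.open(new File("my.mdb")); // then open as usual
    }
}
```

On the latest release no property is needed at all, since large index support is the default.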
We are getting the same error when updating some columns.
The error happens when a new data page needs to be created, while updating the _globalUsageMap.
We checked out the code, removed the throwing of the exception, and everything works as intended.
Maybe you could add an option to ignore UsageMap errors, so that it is possible to work with partly corrupted mdb files?
Ignoring that exception will just cause further corruption, because it indicates that you are attempting to write to a page which is already in use. It is always possible that there is a bug in jackcess; if you have a database which exhibits the problem, you could attach it to a bug report. If the situation really is related to the creation of new data pages (i.e. pages which are outside the bounds of the current file), then the exception could reasonably be ignored.
The mdb file is 50MB. I've deleted some tables by opening the file with OpenOffice Base, but the size remains the same.
If we use MS Access to compact the file, the error disappears.
Do you know any other way to reduce the file size?
It is very hard to reduce the size of the file without rewriting all the data; that is why the files don't shrink unless you run a compact/repair operation (which rewrites all the data and removes the "holes").
Like I said, it is very hard for me to figure out the issue without an example db. Can you at least post a stack trace?
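The compact/repair idea can be illustrated generically: deleting records only leaves holes behind, and only copying the live data into a fresh file reclaims the space. A toy sketch (pages modeled as strings, with null marking a hole; this is an illustration, not how Access files are actually laid out):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy illustration of compact/repair: deleting records only leaves holes in
// the file; rewriting the live pages into a fresh file is what shrinks it.
public class ToyCompactor {
    // Pages are fixed-size slots; null marks a hole left behind by a delete.
    static List<String> compact(List<String> pages) {
        List<String> fresh = new ArrayList<>();
        for (String page : pages) {
            if (page != null) {   // copy only live pages into the new file
                fresh.add(page);
            }
        }
        return fresh;             // the holes are gone, so the file is smaller
    }

    public static void main(String[] args) {
        List<String> file = Arrays.asList("rowdata1", null, null, "rowdata2");
        System.out.println(compact(file)); // prints [rowdata1, rowdata2]
    }
}
```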
java.io.IOException: Page number 14652 already removed from usage map, expected range 14496 to 28992
at com.healthmarketscience.jackcess.UsageMap.updateMap(UsageMap.java:329)
at com.healthmarketscience.jackcess.UsageMap$InlineHandler.addOrRemovePageNumber(UsageMap.java:481)
at com.healthmarketscience.jackcess.UsageMap.removePageNumber(UsageMap.java:312)
at com.healthmarketscience.jackcess.PageChannel.writeNewPage(PageChannel.java:233)
at com.healthmarketscience.jackcess.PageChannel.allocateNewPage(PageChannel.java:243)
at com.healthmarketscience.jackcess.TempPageHolder.setNewPage(TempPageHolder.java:115)
at com.healthmarketscience.jackcess.Column.getLongValuePage(Column.java:1044)
at com.healthmarketscience.jackcess.Column.writeLongValue(Column.java:923)
at com.healthmarketscience.jackcess.Column.write(Column.java:1126)
at com.healthmarketscience.jackcess.Column.write(Column.java:1057)
at com.healthmarketscience.jackcess.Table.createRow(Table.java:1666)
at com.healthmarketscience.jackcess.Table.updateRow(Table.java:1390)
at com.healthmarketscience.jackcess.Cursor.setCurrentRowValue(Cursor.java:904)
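The check that throws at the top of this trace can be modeled with a toy "inline" usage map: a bitmap covering a fixed page range, where removing a page that is outside the range or not actually present signals corruption. This is a hypothetical simplification for illustration, not the real Jackcess data structure:

```java
import java.io.IOException;
import java.util.BitSet;

// Toy model of an inline usage map: a bitmap over a fixed page range.
// Hypothetical simplification for illustration, not jackcess code.
public class ToyUsageMap {
    private final int startPage;  // first page covered by this map
    private final int endPage;    // one past the last covered page
    private final BitSet inUse = new BitSet();

    ToyUsageMap(int startPage, int endPage) {
        this.startPage = startPage;
        this.endPage = endPage;
    }

    void addPage(int pageNumber) {
        inUse.set(pageNumber - startPage);
    }

    // Mirrors the failure in the stack trace: a page the map does not contain
    // cannot be removed; that inconsistency indicates corruption.
    void removePage(int pageNumber) throws IOException {
        if (pageNumber < startPage || pageNumber >= endPage
                || !inUse.get(pageNumber - startPage)) {
            throw new IOException("Page number " + pageNumber
                    + " already removed from usage map, expected range "
                    + startPage + " to " + endPage);
        }
        inUse.clear(pageNumber - startPage);
    }
}
```

Removing the same page twice (or a page the map never tracked) produces exactly the kind of message seen above.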
Okay, I made some changes in trunk which should mitigate certain usage map inconsistencies. In the case where a known free page is being used (because it was just added to the end of the file), an exception will no longer be thrown if the usage map has bad info. This will be in the 1.2.2 release.
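The mitigation can be sketched roughly as follows (a hypothetical simplification, not the actual jackcess 1.2.2 code): a page just allocated past the old end of the file is known to be free no matter what the usage map claims, so a stale map entry for it is corrected instead of triggering the exception.

```java
import java.util.BitSet;

// Rough sketch of the mitigation described above -- hypothetical, not the
// actual jackcess 1.2.2 code. The free-page map records which pages are
// free; writing a page normally removes it from that map, and a missing
// entry used to throw. A page freshly appended past the old end of the file
// is known to be free, so a stale map entry is tolerated and fixed up.
public class ToyPageChannel {
    private int pageCount;                    // pages currently in the file
    private final BitSet freePages = new BitSet();

    ToyPageChannel(int pageCount) {
        this.pageCount = pageCount;
    }

    // Allocate a brand-new page at the end of the file.
    int allocateNewPage() {
        int newPage = pageCount++;
        // Even if freePages wrongly claims this page is not free, the page
        // did not exist before this call, so the map entry must be stale:
        // clear it instead of throwing.
        freePages.clear(newPage);
        return newPage;
    }
}
```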
I am getting this error randomly:
Page number 1537 already removed from usage map, expected range 0 to 556512
Does anybody know why?
Regards
It means either your Access file is corrupted or there is a bug in jackcess.
I think the above error is caused by the "FIXME cannot write large index yet" error discussed earlier.
Thanks for the update. I have enabled large index support; hopefully that fixes my problems.
Regards
Note: if you are worried about the data in the file, you could email me the file directly.
I'm currently uploading the file. I'll send you a link later.
I've written test code that reproduces the exception; the stack trace is the one posted above.
Great, thanks!
Any updates on this?
I sent an email with a link to the mdb file to jahlborn@users.sourceforge.net six days ago.
Can you download it from there?
The email never arrived; please resend.
I've just sent it again from the sourceforge "send me a message" page (https://sourceforge.net/sendmessage.php?touser=1032055).
The subject is "mdb with UsageMap bug".
Ah, found it in my spam folder; got it this time.
Thanks a lot for the fix!