This list is closed, nobody may subscribe to it.
Messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2000 |     | 1   | 53  | 28  | 5   | 7   | 16  | 15  | 10  | 1   |     | 1   |
| 2001 | 9   | 7   | 1   | 7   | 6   |     | 15  | 10  | 2   | 12  | 3   | 2   |
| 2002 | 2   | 12  | 33  | 30  | 5   | 18  | 18  | 47  | 8   | 7   | 8   | 13  |
| 2003 | 48  | 8   | 10  | 30  | 6   | 8   | 19  | 36  | 19  | 16  | 11  | 17  |
| 2004 | 11  | 22  | 52  | 45  | 18  | 72  | 14  | 31  | 19  | 27  | 19  | 25  |
| 2005 | 16  | 46  | 50  | 3   | 21  | 3   | 24  | 33  | 25  | 23  | 30  | 20  |
| 2006 | 12  | 11  | 8   | 15  | 27  | 15  | 19  | 5   | 9   | 1   | 2   | 3   |
| 2007 |     | 3   | 18  | 5   | 9   |     | 10  | 3   | 8   | 1   | 7   | 9   |
| 2008 | 2   |     | 10  | 4   |     | 5   | 9   |     | 1   |     |     | 8   |
| 2009 |     |     |     |     | 1   |     |     |     |     | 11  | 1   | 20  |
| 2010 |     | 2   |     | 7   |     | 23  | 3   | 6   | 1   | 4   | 1   |     |
| 2011 | 1   | 26  | 25  | 11  | 5   | 5   | 2   | 39  | 12  | 6   |     |     |
| 2012 | 19  | 5   |     |     |     | 7   |     | 8   |     | 3   | 2   | 3   |
| 2013 | 6   |     | 1   |     | 7   | 5   | 2   |     | 1   | 2   |     |     |
| 2014 | 4   |     | 2   |     |     |     |     | 1   |     |     |     |     |
| 2015 |     |     |     |     | 1   |     |     |     |     |     |     | 1   |
| 2016 | 5   |     | 1   | 1   |     |     |     |     |     |     |     |     |
| 2017 |     | 1   |     |     |     | 2   |     |     |     |     |     |     |
| 2018 |     |     |     |     |     |     |     |     |     |     | 1   |     |
From: Martin E. <m.a...@nc...> - 2005-10-17 09:14:45
|
On Monday 17 Oct 2005 09:59, Nicel KM wrote:

> I work with OpenOffice.org, currently working on getting the mdbtools based db driver integrated into OpenOffice.org. We had taken the 0.6pre1 release (the one which is published) for the integration and it works fine with a patch (bug#1261919 - fixed in cvs). We have a couple of other issues - one crasher while using a large mdb file - which again seems to be working fine with the cvs snapshot of mdbtools.

Likewise, I'd like to see another release (of some form) as, for security reasons, I'm keen to have the mdb support in Kexi build against a 'system' copy of mdbtools.

> The release that we have, 0.6pre1, is dated June 18, 2004 and I believe a significant amount of effort and features/fixes would have gone into it afterwards. It'd be great to have a 0.6pre2 release with the latest fixes. Is that a possibility?

Is it possible to use a different numbering system? Mixing numbers and letters in a version number without separating dots totally confuses pkg-config (if you look in the code, the comment actually says that it will behave arbitrarily in this case).

Cheers
Martin
From: Nicel KM <mn...@no...> - 2005-10-17 09:04:22
|
Hi there,

I work with OpenOffice.org, currently working on getting the mdbtools based db driver integrated into OpenOffice.org. We had taken the 0.6pre1 release (the one which is published) for the integration and it works fine with a patch (bug#1261919 - fixed in cvs). We have a couple of other issues - one crasher while using a large mdb file - which again seems to be working fine with the cvs snapshot of mdbtools.

Now with these issues fixed in cvs we were thinking about getting the latest cvs snapshot of mdbtools to integrate into OOo. But then we thought it may not be a good idea to take the work-in-progress cvs snapshot and find issues in it. So I just wanted to ask you about the possibility of another pre-release - 0.6pre2 or some such - which we can take to integrate into OOo.

The release that we have, 0.6pre1, is dated June 18, 2004 and I believe a significant amount of effort and features/fixes would have gone into it afterwards. It'd be great to have a 0.6pre2 release with the latest fixes. Is that a possibility?

Thanks in advance,
-Nicel.
From: Andreas R. <mdb...@ri...> - 2005-10-14 00:30:12
|
Actually the code I sent is code using the library. Not the applications, but the library. You can just write some code and use the library. If you were planning to incorporate this into the export command then there is work to be done. Thanks, Andy > From: Shanon Mulley <sha...@gm...> > Date: Fri, 14 Oct 2005 08:53:28 +1000 > To: Andreas Richter <an...@ri...> > Cc: Andreas Richter <mdb...@ri...>, "Brian A. Seklecki" > <lav...@sp...>, Sam Moffatt <pa...@gm...>, > <mdb...@li...> > Subject: Re: [mdb-dev] Success - kind of... > > Andreas, > > Thanks for that snippet. > > I did a grep for mdb_read_ole in my source (Well, the whole mdbtools > directory), and found nothing. This is with the 0.6pre (from the cvs > or csv or whatever). > > And with this snippet you have given me, I'm assuming this is used > before compiling mdbtools, to modify the source code. I'm not really > that experienced with tooling around with source code like this (I > work with a windows 4GL package). Can you give me a hint of what I'm > changing? > > Thanks. 
> > On 10/4/05, Andreas Richter <an...@ri...> wrote: >> Here is the promised snippet: >> >> MdbHandle* mdb = mdb_open("<somepath>", MDB_TABLE); >> mdb_read_catalog(mdb, MDB_TABLE); >> MdbTableDef *table = mdb_read_table_by_name(mdb, "<sometablewithblob>", >> MDB_TABLE); >> mdb_read_columns(table); >> mdb_read_indices(table); >> mdb_rewind_table(table); >> >> int blobLen = MDB_BIND_SIZE; >> gchar blobChunk[MDB_BIND_SIZE]; >> int colID = mdb_bind_column_by_name(table, "<blobcolumn>", blobChunk, >> &blobLen); >> MdbColumn * blobCol = g_ptr_array_index(table->columns, colID - 1); >> if (mdb_fetch_row(table)) >> { >> gchar binder[MDB_MEMO_OVERHEAD]; >> memcpy(binder, blobChunk, MDB_MEMO_OVERHEAD); >> int rlen; >> if (rlen = mdb_ole_read(mdb, blobCol, binder, MDB_BIND_SIZE)) >> { >> char* mem = malloc(rlen); >> int pos = 0; >> memcpy(&mem[pos], blobChunk, rlen); >> pos += rlen; >> >> while (rlen = mdb_ole_read_next(mdb, blobCol, binder)) >> { >> mem = realloc(mem, pos + rlen); >> memcpy(&mem[pos], blobChunk, rlen); >> pos += rlen; >> } >> // Use the content of mem here. >> // The length of the data is located >> // in the pos variable at this point. >> free(mem); >> } >> } >> >> >> >>> From: Shanon Mulley <sha...@gm...> >>> Reply-To: Shanon Mulley <sha...@gm...> >>> Date: Tue, 4 Oct 2005 10:28:04 +1000 >>> To: Andreas Richter <mdb...@ri...> >>> Cc: "Brian A. Seklecki" <lav...@sp...>, Sam Moffatt >>> <pa...@gm...>, <mdb...@li...> >>> Subject: Re: [mdb-dev] Success - kind of... >>> >>> So, is this to say that none of the standard mdbtools does this (like >>> mdb-export, or mdb-sql) - I have to play around with the source? >>> >>> I'm looking forward to seeing your snippet of code, although I'm >>> thinking it might break my brain getting it to work. I'm a recent >>> convert to linux, and so far have avoided playing around with source >>> code, but I'm willing to give it a go if need be. 
>>> >>> On 10/3/05, Andreas Richter <mdb...@ri...> wrote: >>>> I can find some of the code, but don't have access from it at my current >>>> location... >>>> Basically mdb_bind_column to a buffer of size MDB_BIND_SIZE >>>> Then after fetching the row. Copy this data into a size MDB_MEMO_OVERHEAD >>>> and use the new buffer in the call to mdb_read_ole. This call will then >>>> copy >>>> the data into the original bound buffer and return the size for you. Copy >>>> that data into yet another buffer. Then call mdb_read_ole_next to see if >>>> there is more data. If there is it will again be copied to the bound >>>> buffer. >>>> You can append it to the data you already stored away. Then call >>>> mdb_read_ole_next again until it returns 0. I'll paste some code here when >>>> I >>>> get home tonight. >>>> There is one little snippet of code in the sources that also uses >>>> mdb_read_ole. That's where I figured out how to use this. Just grep for >>>> mdb_read_ole in all the sources. >>>> Thanks, >>>> Andy >>>> ----- Original Message ----- >>>> From: "Shanon Mulley" <sha...@gm...> >>>> To: "Brian A. Seklecki" <lav...@sp...> >>>> Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt" >>>> <pa...@gm...>; <mdb...@li...> >>>> Sent: Monday, October 03, 2005 8:23 AM >>>> Subject: Re: [mdb-dev] Success - kind of... >>>> >>>> >>>> Andreas, >>>> >>>> You seem to be having more success with reading binary data from >>>> access then myself. I'm wondering if you can give me a few pointers >>>> with getting binary data out. I've been teaching myself how to use >>>> mdbtools, but am yet to work this out. You mention a command >>>> "mdb_read_ole". Can you tell me where this is used? >>>> >>>> I'm using the latest version (from cvs), but didnt enable odbc - that >>>> wouldnt make properly. >>>> >>>> Thanks. >>>> >>>> On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote: >>>>> Specifically which versions of the autotools did you use, just out of >>>>> curiosity? 
>>>>> >>>>> ~BAS >>>>> >>>>> On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: >>>>>> Hi, >>>>>> After beating autoconf, libtool, autogen and so on into pulp I have the >>>>>> CVS >>>>>> version running. It can correctly read the blobs with mdb_read_ole!!! >>>>>> Yes. >>>>>> However blobs that are longer than will cause an segment fault in >>>>>> mdb_read_ole_next. The problem seems to be that the length calculation >>>>>> is >>>>>> wrong. The following code in mdb_find_pg_row comes back with a len of >>>>>> 0xfffffb9b i.e. A negative number. >>>>>> >>>>>> mdb_swap_pgbuf(mdb); >>>>>> mdb_find_row(mdb, row, off, len); >>>>>> mdb_swap_pgbuf(mdb); >>>>>> >>>>>> In mdb_find_row the next_start is 4096 even though I am not loading the >>>>>> blob >>>>>> from the first row. By testing thing I think I figured out that >>>>>> sometimes >>>>>> blobs come with extensions, but the pg_row for the extension (i.e. The >>>>>> next >>>>>> pointer) is zero. Therefore making the following change in >>>>>> mdb_read_ole_next >>>>>> seemed to fix the crash (line 460) >>>>>> >>>>>> if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, >>>>>> &buf, &row_start, &len)) { >>>>>> return 0; >>>>>> } >>>>>> >>>>>> To >>>>>> >>>>>> if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, >>>>>> col->cur_blob_pg_row, >>>>>> &buf, &row_start, &len)) { >>>>>> return 0; >>>>>> } >>>>>> >>>>>> BTW: The CVS version of mdbtools.h has an #include <config.h> which >>>>>> probably >>>>>> won't work from a normal include directory. >>>>>> Thanks, >>>>>> Andy >>>>>> >>>>>>> From: Andreas Richter <mdb...@ri...> >>>>>>> Date: Fri, 30 Sep 2005 11:24:31 -0400 >>>>>>> To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' >>>>>>> <mdb...@ri...> >>>>>>> Cc: <mdb...@li...> >>>>>>> Subject: RE: [mdb-dev] Problem reading blobs >>>>>>> >>>>>>> No success building CVS dump under Darwin. 
The 0.6pre tar file did >>>>>>> work >>>>>>> correctly under a fink installation, but the autogen.sh file for the >>>>>>> CVS >>>>>>> dump did not work. Has anyone done this yet? I'll probably just >>>>>>> execute the >>>>>>> autogen.sh under linux and then move the files to Darwin. Any >>>>>>> suggestions >>>>>>> would be great. >>>>>>> Thanks, >>>>>>> Andy >>>>>>> >>>>>>> -----Original Message----- >>>>>>> From: Sam Moffatt [mailto:pa...@gm...] >>>>>>> Sent: Thursday, September 29, 2005 8:17 AM >>>>>>> To: Andreas Richter >>>>>>> Cc: mdb...@li... >>>>>>> Subject: Re: [mdb-dev] Problem reading blobs >>>>>>> >>>>>>> what version of the library and as others will say, the cvs version >>>>>>> has heaps of improvements in it. >>>>>>> >>>>>>> On 9/29/05, Andreas Richter <mdb...@ri...> wrote: >>>>>>>> >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I am trying to read an access 2k database with blobs. After reading >>>>>>>> the >>>>>>> blob >>>>>>>> with mdb_read_ole it will give me a 4k of data. Calling >>>>>>>> mdb_read_ole_next >>>>>>>> will always tell me that there is no more data. Is mdb_read_ole >>>>>>>> currently >>>>>>>> implemented for normal blobs? Can I help implementing it? I am using >>>>>>>> the >>>>>>>> library under Mac OS X. >>>>>>>> >>>>>>>> Thanks >>>>>>>> >>>>>>>> Andy >>>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------------------------------- >>>>>> This SF.Net email is sponsored by: >>>>>> Power Architecture Resource Center: Free content, downloads, >>>>>> discussions, >>>>>> and more. http://solutions.newsforge.com/ibmarch.tmpl >>>>>> _______________________________________________ >>>>>> mdbtools-dev mailing list >>>>>> mdb...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>>>> >>>>> >>>>> >>>>> ------------------------------------------------------- >>>>> This SF.Net email is sponsored by: >>>>> Power Architecture Resource Center: Free content, downloads, discussions, >>>>> and more. 
http://solutions.newsforge.com/ibmarch.tmpl >>>>> _______________________________________________ >>>>> mdbtools-dev mailing list >>>>> mdb...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>>>> >>>> >>>> >>>> >>>> ------------------------------------------------------- >>>> This SF.Net email is sponsored by: >>>> Power Architecture Resource Center: Free content, downloads, discussions, >>>> and more. http://solutions.newsforge.com/ibmarch.tmpl >>>> _______________________________________________ >>>> mdbtools-dev mailing list >>>> mdb...@li... >>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>>> >> >> >> |
From: Shanon M. <sha...@gm...> - 2005-10-13 22:53:30
|
Andreas,

Thanks for that snippet.

I did a grep for mdb_read_ole in my source (well, the whole mdbtools directory), and found nothing. This is with the 0.6pre (from the cvs or csv or whatever).

And with this snippet you have given me, I'm assuming this is used before compiling mdbtools, to modify the source code. I'm not really that experienced with tooling around with source code like this (I work with a windows 4GL package). Can you give me a hint of what I'm changing?

Thanks.

On 10/4/05, Andreas Richter <an...@ri...> wrote:
> Here is the promised snippet:
>
> MdbHandle* mdb = mdb_open("<somepath>", MDB_TABLE);
> mdb_read_catalog(mdb, MDB_TABLE);
> MdbTableDef *table = mdb_read_table_by_name(mdb, "<sometablewithblob>", MDB_TABLE);
> mdb_read_columns(table);
> mdb_read_indices(table);
> mdb_rewind_table(table);
>
> int blobLen = MDB_BIND_SIZE;
> gchar blobChunk[MDB_BIND_SIZE];
> int colID = mdb_bind_column_by_name(table, "<blobcolumn>", blobChunk, &blobLen);
> MdbColumn * blobCol = g_ptr_array_index(table->columns, colID - 1);
> if (mdb_fetch_row(table))
> {
>     gchar binder[MDB_MEMO_OVERHEAD];
>     memcpy(binder, blobChunk, MDB_MEMO_OVERHEAD);
>     int rlen;
>     if (rlen = mdb_ole_read(mdb, blobCol, binder, MDB_BIND_SIZE))
>     {
>         char* mem = malloc(rlen);
>         int pos = 0;
>         memcpy(&mem[pos], blobChunk, rlen);
>         pos += rlen;
>
>         while (rlen = mdb_ole_read_next(mdb, blobCol, binder))
>         {
>             mem = realloc(mem, pos + rlen);
>             memcpy(&mem[pos], blobChunk, rlen);
>             pos += rlen;
>         }
>         // Use the content of mem here.
>         // The length of the data is located
>         // in the pos variable at this point.
>         free(mem);
>     }
> }
>
>> From: Shanon Mulley <sha...@gm...>
>> Reply-To: Shanon Mulley <sha...@gm...>
>> Date: Tue, 4 Oct 2005 10:28:04 +1000
>> To: Andreas Richter <mdb...@ri...>
>> Cc: "Brian A. Seklecki" <lav...@sp...>, Sam Moffatt <pa...@gm...>, <mdb...@li...>
>> Subject: Re: [mdb-dev] Success - kind of...
>>
>> So, is this to say that none of the standard mdbtools does this (like mdb-export, or mdb-sql) - I have to play around with the source?
>>
>> I'm looking forward to seeing your snippet of code, although I'm thinking it might break my brain getting it to work. I'm a recent convert to linux, and so far have avoided playing around with source code, but I'm willing to give it a go if need be.
>>
>> On 10/3/05, Andreas Richter <mdb...@ri...> wrote:
>>> I can find some of the code, but don't have access from it at my current location...
>>> Basically mdb_bind_column to a buffer of size MDB_BIND_SIZE.
>>> Then after fetching the row, copy this data into a buffer of size MDB_MEMO_OVERHEAD and use the new buffer in the call to mdb_read_ole. This call will then copy the data into the original bound buffer and return the size for you. Copy that data into yet another buffer. Then call mdb_read_ole_next to see if there is more data. If there is, it will again be copied to the bound buffer. You can append it to the data you already stored away. Then call mdb_read_ole_next again until it returns 0. I'll paste some code here when I get home tonight.
>>> There is one little snippet of code in the sources that also uses mdb_read_ole. That's where I figured out how to use this. Just grep for mdb_read_ole in all the sources.
>>> Thanks,
>>> Andy
>>> ----- Original Message -----
>>> From: "Shanon Mulley" <sha...@gm...>
>>> To: "Brian A. Seklecki" <lav...@sp...>
>>> Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt" <pa...@gm...>; <mdb...@li...>
>>> Sent: Monday, October 03, 2005 8:23 AM
>>> Subject: Re: [mdb-dev] Success - kind of...
>>>
>>> Andreas,
>>>
>>> You seem to be having more success with reading binary data from access than myself. I'm wondering if you can give me a few pointers with getting binary data out. I've been teaching myself how to use mdbtools, but am yet to work this out. You mention a command "mdb_read_ole". Can you tell me where this is used?
>>>
>>> I'm using the latest version (from cvs), but didnt enable odbc - that wouldnt make properly.
>>>
>>> Thanks.
>>>
>>> On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote:
>>>> Specifically which versions of the autotools did you use, just out of curiosity?
>>>>
>>>> ~BAS
>>>>
>>>> On Fri, 2005-09-30 at 20:59, Andreas Richter wrote:
>>>>> Hi,
>>>>> After beating autoconf, libtool, autogen and so on into pulp I have the CVS version running. It can correctly read the blobs with mdb_read_ole!!! Yes.
>>>>> However blobs that are longer than will cause a segment fault in mdb_read_ole_next. The problem seems to be that the length calculation is wrong. The following code in mdb_find_pg_row comes back with a len of 0xfffffb9b, i.e. a negative number.
>>>>>
>>>>> mdb_swap_pgbuf(mdb);
>>>>> mdb_find_row(mdb, row, off, len);
>>>>> mdb_swap_pgbuf(mdb);
>>>>>
>>>>> In mdb_find_row the next_start is 4096 even though I am not loading the blob from the first row. By testing I think I figured out that sometimes blobs come with extensions, but the pg_row for the extension (i.e. the next pointer) is zero. Therefore making the following change in mdb_read_ole_next seemed to fix the crash (line 460)
>>>>>
>>>>> if (mdb_find_pg_row(mdb, col->cur_blob_pg_row,
>>>>>         &buf, &row_start, &len)) {
>>>>>     return 0;
>>>>> }
>>>>>
>>>>> To
>>>>>
>>>>> if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, col->cur_blob_pg_row,
>>>>>         &buf, &row_start, &len)) {
>>>>>     return 0;
>>>>> }
>>>>>
>>>>> BTW: The CVS version of mdbtools.h has an #include <config.h> which probably won't work from a normal include directory.
>>>>> Thanks,
>>>>> Andy
>>>>>
>>>>>> From: Andreas Richter <mdb...@ri...>
>>>>>> Date: Fri, 30 Sep 2005 11:24:31 -0400
>>>>>> To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' <mdb...@ri...>
>>>>>> Cc: <mdb...@li...>
>>>>>> Subject: RE: [mdb-dev] Problem reading blobs
>>>>>>
>>>>>> No success building CVS dump under Darwin. The 0.6pre tar file did work correctly under a fink installation, but the autogen.sh file for the CVS dump did not work. Has anyone done this yet? I'll probably just execute the autogen.sh under linux and then move the files to Darwin. Any suggestions would be great.
>>>>>> Thanks,
>>>>>> Andy
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Sam Moffatt [mailto:pa...@gm...]
>>>>>> Sent: Thursday, September 29, 2005 8:17 AM
>>>>>> To: Andreas Richter
>>>>>> Cc: mdb...@li...
>>>>>> Subject: Re: [mdb-dev] Problem reading blobs
>>>>>>
>>>>>> what version of the library and as others will say, the cvs version has heaps of improvements in it.
>>>>>>
>>>>>> On 9/29/05, Andreas Richter <mdb...@ri...> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am trying to read an access 2k database with blobs. After reading the blob with mdb_read_ole it will give me 4k of data. Calling mdb_read_ole_next will always tell me that there is no more data. Is mdb_read_ole currently implemented for normal blobs? Can I help implementing it? I am using the library under Mac OS X.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> Andy
From: Alfred A. <al...@an...> - 2005-10-10 08:02:18
|
The text sizes in different versions of Access are different (8-bit characters vs. 16-bit characters) and MDB reports the *actual* sizes okay. How to interpret them when creating the database description is a different thing. I solved this problem by dividing the size by two if the Jet version is 4. Here is a code snippet:

/*
 * col_size contains the column size;
 * if it comes from a JET4 database, divide it by two,
 * otherwise keep it as is, for it might be okay :-?
 */
if (IS_JET4(mdb))
	col_size /= 2;

>Date: Sun, 09 Oct 2005 15:22:05 -0600
>To: mdb...@li...
>From: "Michael D. Petersen" <ra...@dr...>
>Subject: [mdb-dev] mdb-schema field size
>
>I am using mdb-schema to export an Access 2000 database. All is working
>well except the field sizes of the text fields are all twice as large as
>they should be in the mdb-schema output. For example, I have two text
>fields, one length 5 and the other 255 as reported by Access. However, the
>schema output (mysql backend) is:
>
>DROP TABLE Table1;
>CREATE TABLE Table1
> (
>	TextField5	varchar (10),
>	TextField255	varchar (510)
>);
>
>The content of the test file is (mdb-export):
>TextField5,TextField255
>"abcde","LotsOfData"
>
>Is this a bug? A character set issue? Is there a work-around already
>designed into mdbtools? The problem is that MySQL doesn't like the varchar
>fields being larger than 255 (it wants a BLOB instead) when importing the
>schema file. Also, it allocates twice the size necessary. When
>viewing the .mdb file with a hex editor all the field contents are contiguous:
>
> 1afe0  00 00 02 00 ff fe 61 62 63 64 65 ff fe 4c 6f 74  ......abcde..Lot
> 1aff0  73 4f 66 44 61 74 61 15 00 09 00 02 00 02 00 03  sOfData.........
>
>Therefore, there doesn't seem to be a reason why the size should come up
>double what Access says it is, unless I'm misunderstanding something here
>(entirely possible...).
>
>I have recreated this problem using a very simple Access database generated
>from scratch (above). I have also tried using mdbtools compiled on
>Cygwin(Win2k) and Linux, both from the CVS sources, with the same
>results. The field sizes are also independent of the backend selected in
>mdb-schema.
>
>Thanks for the help,
>Michael
From: Michael D. P. <ra...@dr...> - 2005-10-09 21:22:08
|
I am using mdb-schema to export an Access 2000 database. All is working well except that the field sizes of the text fields are all twice as large as they should be in the mdb-schema output. For example, I have two text fields, one of length 5 and the other 255 as reported by Access. However, the schema output (mysql backend) is:

DROP TABLE Table1;
CREATE TABLE Table1
 (
	TextField5	varchar (10),
	TextField255	varchar (510)
);

The content of the test file is (mdb-export):

TextField5,TextField255
"abcde","LotsOfData"

Is this a bug? A character set issue? Is there a work-around already designed into mdbtools? The problem is that MySQL doesn't like the varchar fields being larger than 255 (it wants a BLOB instead) when importing the schema file. Also, it allocates twice the size necessary. When viewing the .mdb file with a hex editor all the field contents are contiguous:

 1afe0  00 00 02 00 ff fe 61 62 63 64 65 ff fe 4c 6f 74  ......abcde..Lot
 1aff0  73 4f 66 44 61 74 61 15 00 09 00 02 00 02 00 03  sOfData.........

Therefore, there doesn't seem to be a reason why the size should come up double what Access says it is, unless I'm misunderstanding something here (entirely possible...).

I have recreated this problem using a very simple Access database generated from scratch (above). I have also tried using mdbtools compiled on Cygwin(Win2k) and Linux, both from the CVS sources, with the same results. The field sizes are also independent of the backend selected in mdb-schema.

Thanks for the help,
Michael
From: Yasir A. <li...@en...> - 2005-10-08 06:47:07
|
Many thanks to the team for developing this code - I've found it very useful. I'm using the latest CVS version on Linux.

I think I found a bug in data.c which causes mdb-export -I to sometimes print NULL for boolean columns (even when the value is not null). Here's the CVS diff:

$ cvs diff data.c
Index: data.c
===================================================================
RCS file: /cvsroot/mdbtools/mdbtools/src/libmdb/data.c,v
retrieving revision 1.102
diff -r1.102 data.c
173a174,176
> 	if (col->len_ptr) {
> 		*col->len_ptr = 1;
> 	}

I hardly know the code at all, so my fix could be completely wrong, but it looks as though we're not updating *col->len_ptr for boolean fields, and this causes the -I switch of mdb-export (which relies on col->len_ptr) to think its value is 0. In fact I'm not sure the value is ever set to 0 to begin with, so whether it prints NULL or not depends on what's in the block of memory when bound_lens is allocated (see mdb-export.c).

Just in case this isn't clear, the change is in data.c, function mdb_xfer_bound_bool(), with the new code inserted at line 174.

Thanks, Yasir
From: Pedro A. A. <pa...@ti...> - 2005-10-07 05:42:50
|
Hi folks,

In the latest CVS of mdbtools, mdb-sql dumped core because of a simple error. Attached is the patch that solves the problem. I also included a small addition, which allows me to call mdb-sql from scripts in a quite interactive way. I rely on this feature for a lot of scripts which I don't want to port to Java or any other language. They work and that's (more than) enough for me.

Regards,
Pedro A. Aranda
From: Andreas R. <mdb...@ri...> - 2005-10-04 11:59:57
|
Here are the versions of the tools I was using to compile under Mac OS X 10.3 with fink installed. I used Fink to supply glib.

automake (GNU automake) 1.7.6
autoconf (GNU Autoconf) 2.59
ltmain.sh (GNU libtool) 1.3.5 (1.385.2.206 2000/05/27 11:12:27)

(I had to change all occurrences of libtool to glibtool to use the GNU versions.)

I was really thrashing to get this to work. I am sorry that I can't recall every little thing I did....

Thanks,
Andy

> From: "Brian A. Seklecki" <lav...@sp...>
> Date: Sun, 02 Oct 2005 19:24:40 -0400
> To: Andreas Richter <mdb...@ri...>
> Cc: 'Sam Moffatt' <pa...@gm...>, <mdb...@li...>
> Subject: Re: [mdb-dev] Success - kind of...
>
> Specifically which versions of the autotools did you use, just out of curiosity?
>
> ~BAS
>
> On Fri, 2005-09-30 at 20:59, Andreas Richter wrote:
>> Hi,
>> After beating autoconf, libtool, autogen and so on into pulp I have the CVS version running. It can correctly read the blobs with mdb_read_ole!!! Yes.
>> However blobs that are longer than will cause a segment fault in mdb_read_ole_next. The problem seems to be that the length calculation is wrong. The following code in mdb_find_pg_row comes back with a len of
Therefore making the following change in mdb_read_ole_next >> seemed to fix the crash (line 460) >> >> if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, >> &buf, &row_start, &len)) { >> return 0; >> } >> >> To >> >> if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, >> col->cur_blob_pg_row, >> &buf, &row_start, &len)) { >> return 0; >> } >> >> BTW: The CVS version of mdbtools.h has an #include <config.h> which probably >> won't work from a normal include directory. >> Thanks, >> Andy >> >>> From: Andreas Richter <mdb...@ri...> >>> Date: Fri, 30 Sep 2005 11:24:31 -0400 >>> To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' >>> <mdb...@ri...> >>> Cc: <mdb...@li...> >>> Subject: RE: [mdb-dev] Problem reading blobs >>> >>> No success building CVS dump under Darwin. The 0.6pre tar file did work >>> correctly under a fink installation, but the autogen.sh file for the CVS >>> dump did not work. Has anyone done this yet? I'll probably just execute the >>> autogen.sh under linux and then move the files to Darwin. Any suggestions >>> would be great. >>> Thanks, >>> Andy >>> >>> -----Original Message----- >>> From: Sam Moffatt [mailto:pa...@gm...] >>> Sent: Thursday, September 29, 2005 8:17 AM >>> To: Andreas Richter >>> Cc: mdb...@li... >>> Subject: Re: [mdb-dev] Problem reading blobs >>> >>> what version of the library and as others will say, the cvs version >>> has heaps of improvements in it. >>> >>> On 9/29/05, Andreas Richter <mdb...@ri...> wrote: >>>> >>>> >>>> Hi, >>>> >>>> I am trying to read an access 2k database with blobs. After reading the >>> blob >>>> with mdb_read_ole it will give me a 4k of data. Calling mdb_read_ole_next >>>> will always tell me that there is no more data. Is mdb_read_ole currently >>>> implemented for normal blobs? Can I help implementing it? I am using the >>>> library under Mac OS X. 
>>>> >>>> Thanks >>>> >>>> Andy >>>> >>> >> >> >> >> >> ------------------------------------------------------- >> This SF.Net email is sponsored by: >> Power Architecture Resource Center: Free content, downloads, discussions, >> and more. http://solutions.newsforge.com/ibmarch.tmpl >> _______________________________________________ >> mdbtools-dev mailing list >> mdb...@li... >> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > ------ End of Forwarded Message |
From: SourceForge.net <no...@so...> - 2005-10-04 09:48:59
|
Bugs item #1312728, was opened at 2005-10-04 11:48 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=102294&aid=1312728&group_id=2294 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: build tools Group: None Status: Open Resolution: None Priority: 5 Submitted By: Benjamin Riefenstahl (cc_benny) Assigned to: Nobody/Anonymous (nobody) Summary: Configure and ltmain.sh disagree on shrext Initial Comment: Configure uses the variable "shrext_cmds" to set the shared library extension. Ltmain.sh looks for "shrext" instead. The result is that the libs do not have an extension at all and ldconfig doesn't pick them up. Gmdb2 still works with an explicit LD_LIBRARY_PATH, bypassing ldconfig. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=102294&aid=1312728&group_id=2294 |
From: SourceForge.net <no...@so...> - 2005-10-04 09:44:53
|
Bugs item #1312726, was opened at 2005-10-04 11:44 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=102294&aid=1312726&group_id=2294 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: GUI Group: None Status: Open Resolution: None Priority: 5 Submitted By: Benjamin Riefenstahl (cc_benny) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault when showing table definition Initial Comment: I get segfaults in the heap manager (calloc/malloc) when I try to display some table definitions (context menu->"Definition"). AFAICS this points to a memory corruption error. Example stacktrace:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1088713824 (LWP 28675)]
0x40c9bb9f in calloc () from /lib/tls/libc.so.6
(gdb) bt
#0  0x40c9bb9f in calloc () from /lib/tls/libc.so.6
#1  0x40be2caf in g_malloc0 () from /opt/gnome/lib/libglib-2.0.so.0
#2  0x40b6575b in g_type_create_instance () from /opt/gnome/lib/libgobject-2.0.so.0
#3  0x40b4c908 in g_object_constructor () from /opt/gnome/lib/libgobject-2.0.so.0
#4  0x40b4af8c in g_object_newv () from /opt/gnome/lib/libgobject-2.0.so.0
#5  0x40b4b548 in g_object_new_valist () from /opt/gnome/lib/libgobject-2.0.so.0
#6  0x40b4b6da in g_object_new () from /opt/gnome/lib/libgobject-2.0.so.0
#7  0x407885a4 in pango_layout_new () from /opt/gnome/lib/libpango-1.0.so.0
#8  0x406099d3 in gtk_widget_create_pango_layout () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#9  0x40461ae6 in _gtk_clist_create_cell_layout () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#10 0x4046e168 in draw_row () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#11 0x40464c79 in gtk_clist_set_pixmap () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#12 0x0804f3c3 in gmdb_table_def_new (entry=0x8205c38) at table_def.c:137
#13 0x40b53ad1 in g_cclosure_marshal_VOID__VOID () from /opt/gnome/lib/libgobject-2.0.so.0
#14 0x40b44bfb in g_closure_invoke () from /opt/gnome/lib/libgobject-2.0.so.0
#15 0x40b55fb0 in signal_emit_unlocked_R () from /opt/gnome/lib/libgobject-2.0.so.0
#16 0x40b5768a in g_signal_emit_valist () from /opt/gnome/lib/libgobject-2.0.so.0
#17 0x40b579b2 in g_signal_emit () from /opt/gnome/lib/libgobject-2.0.so.0
#18 0x4060ac76 in gtk_widget_activate () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#19 0x40519c32 in gtk_menu_shell_activate_item () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#20 0x4051b279 in gtk_menu_shell_button_release () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#21 0x4050f78d in gtk_menu_button_release () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#22 0x4050dfa4 in _gtk_marshal_BOOLEAN__BOXED () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#23 0x40b448e7 in g_type_class_meta_marshal () from /opt/gnome/lib/libgobject-2.0.so.0
#24 0x40b44bfb in g_closure_invoke () from /opt/gnome/lib/libgobject-2.0.so.0
#25 0x40b563f6 in signal_emit_unlocked_R () from /opt/gnome/lib/libgobject-2.0.so.0
#26 0x40b573f6 in g_signal_emit_valist () from /opt/gnome/lib/libgobject-2.0.so.0
#27 0x40b579b2 in g_signal_emit () from /opt/gnome/lib/libgobject-2.0.so.0
#28 0x40605d64 in gtk_widget_event_internal () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#29 0x40506469 in gtk_propagate_event () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#30 0x40507991 in gtk_main_do_event () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#31 0x406fbc12 in gdk_event_dispatch () from /opt/gnome/lib/libgdk-x11-2.0.so.0
#32 0x40bdbd17 in g_main_context_dispatch () from /opt/gnome/lib/libglib-2.0.so.0
#33 0x40bde467 in g_main_context_iterate () from /opt/gnome/lib/libglib-2.0.so.0
#34 0x40bdf677 in g_main_loop_run () from /opt/gnome/lib/libglib-2.0.so.0
#35 0x40507e43 in gtk_main () from /opt/gnome/lib/libgtk-x11-2.0.so.0
#36 0x0804dc29 in main (argc=1, argv=0xbfffee04) at main2.c:270
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=102294&aid=1312726&group_id=2294 |
From: Andreas R. <mdb...@ri...> - 2005-10-04 01:30:35
|
Here is the promised snippet:

MdbHandle* mdb = mdb_open("<somepath>", MDB_TABLE);
mdb_read_catalog(mdb, MDB_TABLE);
MdbTableDef *table = mdb_read_table_by_name(mdb, "<sometablewithblob>", MDB_TABLE);
mdb_read_columns(table);
mdb_read_indices(table);
mdb_rewind_table(table);

int blobLen = MDB_BIND_SIZE;
gchar blobChunk[MDB_BIND_SIZE];
int colID = mdb_bind_column_by_name(table, "<blobcolumn>", blobChunk, &blobLen);
MdbColumn *blobCol = g_ptr_array_index(table->columns, colID - 1);

if (mdb_fetch_row(table)) {
    gchar binder[MDB_MEMO_OVERHEAD];
    memcpy(binder, blobChunk, MDB_MEMO_OVERHEAD);
    int rlen;
    if (rlen = mdb_ole_read(mdb, blobCol, binder, MDB_BIND_SIZE)) {
        char* mem = malloc(rlen);
        int pos = 0;
        memcpy(&mem[pos], blobChunk, rlen);
        pos += rlen;
        while (rlen = mdb_ole_read_next(mdb, blobCol, binder)) {
            mem = realloc(mem, pos + rlen);
            memcpy(&mem[pos], blobChunk, rlen);
            pos += rlen;
        }
        // Use the content of mem here.
        // The length of the data is located
        // in the pos variable at this point.
        free(mem);
    }
}

> From: Shanon Mulley <sha...@gm...> > Reply-To: Shanon Mulley <sha...@gm...> > Date: Tue, 4 Oct 2005 10:28:04 +1000 > To: Andreas Richter <mdb...@ri...> > Cc: "Brian A. Seklecki" <lav...@sp...>, Sam Moffatt > <pa...@gm...>, <mdb...@li...> > Subject: Re: [mdb-dev] Success - kind of... > > So, is this to say that none of the standard mdbtools does this (like > mdb-export, or mdb-sql) - I have to play around with the source? > > I'm looking forward to seeing your snippet of code, although I'm > thinking it might break my brain getting it to work. I'm a recent > convert to linux, and so far have avoided playing around with source > code, but I'm willing to give it a go if need be. > > On 10/3/05, Andreas Richter <mdb...@ri...> wrote: >> I can find some of the code, but don't have access from it at my current >> location... >> Basically mdb_bind_column to a buffer of size MDB_BIND_SIZE >> Then after fetching the row. 
Copy this data into a size MDB_MEMO_OVERHEAD >> and use the new buffer in the call to mdb_read_ole. This call will then copy >> the data into the original bound buffer and return the size for you. Copy >> that data into yet another buffer. Then call mdb_read_ole_next to see if >> there is more data. If there is it will again be copied to the bound buffer. >> You can append it to the data you already stored away. Then call >> mdb_read_ole_next again until it returns 0. I'll paste some code here when I >> get home tonight. >> There is one little snippet of code in the sources that also uses >> mdb_read_ole. That's where I figured out how to use this. Just grep for >> mdb_read_ole in all the sources. >> Thanks, >> Andy >> ----- Original Message ----- >> From: "Shanon Mulley" <sha...@gm...> >> To: "Brian A. Seklecki" <lav...@sp...> >> Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt" >> <pa...@gm...>; <mdb...@li...> >> Sent: Monday, October 03, 2005 8:23 AM >> Subject: Re: [mdb-dev] Success - kind of... >> >> >> Andreas, >> >> You seem to be having more success with reading binary data from >> access then myself. I'm wondering if you can give me a few pointers >> with getting binary data out. I've been teaching myself how to use >> mdbtools, but am yet to work this out. You mention a command >> "mdb_read_ole". Can you tell me where this is used? >> >> I'm using the latest version (from cvs), but didnt enable odbc - that >> wouldnt make properly. >> >> Thanks. >> >> On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote: >>> Specifically which versions of the autotools did you use, just out of >>> curiosity? >>> >>> ~BAS >>> >>> On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: >>>> Hi, >>>> After beating autoconf, libtool, autogen and so on into pulp I have the >>>> CVS >>>> version running. It can correctly read the blobs with mdb_read_ole!!! >>>> Yes. >>>> However blobs that are longer than will cause an segment fault in >>>> mdb_read_ole_next. 
The problem seems to be that the length calculation >>>> is >>>> wrong. The following code in mdb_find_pg_row comes back with a len of >>>> 0xfffffb9b i.e. A negative number. >>>> >>>> mdb_swap_pgbuf(mdb); >>>> mdb_find_row(mdb, row, off, len); >>>> mdb_swap_pgbuf(mdb); >>>> >>>> In mdb_find_row the next_start is 4096 even though I am not loading the >>>> blob >>>> from the first row. By testing thing I think I figured out that >>>> sometimes >>>> blobs come with extensions, but the pg_row for the extension (i.e. The >>>> next >>>> pointer) is zero. Therefore making the following change in >>>> mdb_read_ole_next >>>> seemed to fix the crash (line 460) >>>> >>>> if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, >>>> &buf, &row_start, &len)) { >>>> return 0; >>>> } >>>> >>>> To >>>> >>>> if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, >>>> col->cur_blob_pg_row, >>>> &buf, &row_start, &len)) { >>>> return 0; >>>> } >>>> >>>> BTW: The CVS version of mdbtools.h has an #include <config.h> which >>>> probably >>>> won't work from a normal include directory. >>>> Thanks, >>>> Andy >>>> >>>>> From: Andreas Richter <mdb...@ri...> >>>>> Date: Fri, 30 Sep 2005 11:24:31 -0400 >>>>> To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' >>>>> <mdb...@ri...> >>>>> Cc: <mdb...@li...> >>>>> Subject: RE: [mdb-dev] Problem reading blobs >>>>> >>>>> No success building CVS dump under Darwin. The 0.6pre tar file did >>>>> work >>>>> correctly under a fink installation, but the autogen.sh file for the >>>>> CVS >>>>> dump did not work. Has anyone done this yet? I'll probably just >>>>> execute the >>>>> autogen.sh under linux and then move the files to Darwin. Any >>>>> suggestions >>>>> would be great. >>>>> Thanks, >>>>> Andy >>>>> >>>>> -----Original Message----- >>>>> From: Sam Moffatt [mailto:pa...@gm...] >>>>> Sent: Thursday, September 29, 2005 8:17 AM >>>>> To: Andreas Richter >>>>> Cc: mdb...@li... 
>>>>> Subject: Re: [mdb-dev] Problem reading blobs >>>>> >>>>> what version of the library and as others will say, the cvs version >>>>> has heaps of improvements in it. >>>>> >>>>> On 9/29/05, Andreas Richter <mdb...@ri...> wrote: >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> I am trying to read an access 2k database with blobs. After reading >>>>>> the >>>>> blob >>>>>> with mdb_read_ole it will give me a 4k of data. Calling >>>>>> mdb_read_ole_next >>>>>> will always tell me that there is no more data. Is mdb_read_ole >>>>>> currently >>>>>> implemented for normal blobs? Can I help implementing it? I am using >>>>>> the >>>>>> library under Mac OS X. >>>>>> >>>>>> Thanks >>>>>> >>>>>> Andy >>>>>> >>>>> >>>> >>>> >>>> >>>> >>>> ------------------------------------------------------- >>>> This SF.Net email is sponsored by: >>>> Power Architecture Resource Center: Free content, downloads, >>>> discussions, >>>> and more. http://solutions.newsforge.com/ibmarch.tmpl >>>> _______________________________________________ >>>> mdbtools-dev mailing list >>>> mdb...@li... >>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>> >>> >>> >>> ------------------------------------------------------- >>> This SF.Net email is sponsored by: >>> Power Architecture Resource Center: Free content, downloads, discussions, >>> and more. http://solutions.newsforge.com/ibmarch.tmpl >>> _______________________________________________ >>> mdbtools-dev mailing list >>> mdb...@li... >>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>> >> >> >> >> ------------------------------------------------------- >> This SF.Net email is sponsored by: >> Power Architecture Resource Center: Free content, downloads, discussions, >> and more. http://solutions.newsforge.com/ibmarch.tmpl >> _______________________________________________ >> mdbtools-dev mailing list >> mdb...@li... >> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >> |
From: Shanon M. <sha...@gm...> - 2005-10-04 00:28:19
|
So, is this to say that none of the standard mdbtools does this (like mdb-export, or mdb-sql) - I have to play around with the source? I'm looking forward to seeing your snippet of code, although I'm thinking it might break my brain getting it to work. I'm a recent convert to linux, and so far have avoided playing around with source code, but I'm willing to give it a go if need be. On 10/3/05, Andreas Richter <mdb...@ri...> wrote: > I can find some of the code, but don't have access from it at my current > location... > Basically mdb_bind_column to a buffer of size MDB_BIND_SIZE > Then after fetching the row. Copy this data into a size MDB_MEMO_OVERHEAD > and use the new buffer in the call to mdb_read_ole. This call will then copy > the data into the original bound buffer and return the size for you. Copy > that data into yet another buffer. Then call mdb_read_ole_next to see if > there is more data. If there is it will again be copied to the bound buffer. > You can append it to the data you already stored away. Then call > mdb_read_ole_next again until it returns 0. I'll paste some code here when I > get home tonight. > There is one little snippet of code in the sources that also uses > mdb_read_ole. That's where I figured out how to use this. Just grep for > mdb_read_ole in all the sources. > Thanks, > Andy > ----- Original Message ----- > From: "Shanon Mulley" <sha...@gm...> > To: "Brian A. Seklecki" <lav...@sp...> > Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt" > <pa...@gm...>; <mdb...@li...> > Sent: Monday, October 03, 2005 8:23 AM > Subject: Re: [mdb-dev] Success - kind of... > > > Andreas, > > You seem to be having more success with reading binary data from > access then myself. I'm wondering if you can give me a few pointers > with getting binary data out. I've been teaching myself how to use > mdbtools, but am yet to work this out. You mention a command > "mdb_read_ole". Can you tell me where this is used? 
> > I'm using the latest version (from cvs), but didnt enable odbc - that > wouldnt make properly. > > Thanks. > > On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote: > > Specifically which versions of the autotools did you use, just out of > > curiosity? > > > > ~BAS > > > > On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: > > > Hi, > > > After beating autoconf, libtool, autogen and so on into pulp I have the > > > CVS > > > version running. It can correctly read the blobs with mdb_read_ole!!! > > > Yes. > > > However blobs that are longer than will cause an segment fault in > > > mdb_read_ole_next. The problem seems to be that the length calculation > > > is > > > wrong. The following code in mdb_find_pg_row comes back with a len of > > > 0xfffffb9b i.e. A negative number. > > > > > > mdb_swap_pgbuf(mdb); > > > mdb_find_row(mdb, row, off, len); > > > mdb_swap_pgbuf(mdb); > > > > > > In mdb_find_row the next_start is 4096 even though I am not loading the > > > blob > > > from the first row. By testing thing I think I figured out that > > > sometimes > > > blobs come with extensions, but the pg_row for the extension (i.e. The > > > next > > > pointer) is zero. Therefore making the following change in > > > mdb_read_ole_next > > > seemed to fix the crash (line 460) > > > > > > if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, > > > &buf, &row_start, &len)) { > > > return 0; > > > } > > > > > > To > > > > > > if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, > > > col->cur_blob_pg_row, > > > &buf, &row_start, &len)) { > > > return 0; > > > } > > > > > > BTW: The CVS version of mdbtools.h has an #include <config.h> which > > > probably > > > won't work from a normal include directory. 
> > > Thanks, > > > Andy > > > > > > > From: Andreas Richter <mdb...@ri...> > > > > Date: Fri, 30 Sep 2005 11:24:31 -0400 > > > > To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' > > > > <mdb...@ri...> > > > > Cc: <mdb...@li...> > > > > Subject: RE: [mdb-dev] Problem reading blobs > > > > > > > > No success building CVS dump under Darwin. The 0.6pre tar file did > > > > work > > > > correctly under a fink installation, but the autogen.sh file for the > > > > CVS > > > > dump did not work. Has anyone done this yet? I'll probably just > > > > execute the > > > > autogen.sh under linux and then move the files to Darwin. Any > > > > suggestions > > > > would be great. > > > > Thanks, > > > > Andy > > > > > > > > -----Original Message----- > > > > From: Sam Moffatt [mailto:pa...@gm...] > > > > Sent: Thursday, September 29, 2005 8:17 AM > > > > To: Andreas Richter > > > > Cc: mdb...@li... > > > > Subject: Re: [mdb-dev] Problem reading blobs > > > > > > > > what version of the library and as others will say, the cvs version > > > > has heaps of improvements in it. > > > > > > > > On 9/29/05, Andreas Richter <mdb...@ri...> wrote: > > > >> > > > >> > > > >> Hi, > > > >> > > > >> I am trying to read an access 2k database with blobs. After reading > > > >> the > > > > blob > > > >> with mdb_read_ole it will give me a 4k of data. Calling > > > >> mdb_read_ole_next > > > >> will always tell me that there is no more data. Is mdb_read_ole > > > >> currently > > > >> implemented for normal blobs? Can I help implementing it? I am using > > > >> the > > > >> library under Mac OS X. > > > >> > > > >> Thanks > > > >> > > > >> Andy > > > >> > > > > > > > > > > > > > > > > > > > ------------------------------------------------------- > > > This SF.Net email is sponsored by: > > > Power Architecture Resource Center: Free content, downloads, > > > discussions, > > > and more. 
http://solutions.newsforge.com/ibmarch.tmpl > > > _______________________________________________ > > > mdbtools-dev mailing list > > > mdb...@li... > > > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > > > > > > > > ------------------------------------------------------- > > This SF.Net email is sponsored by: > > Power Architecture Resource Center: Free content, downloads, discussions, > > and more. http://solutions.newsforge.com/ibmarch.tmpl > > _______________________________________________ > > mdbtools-dev mailing list > > mdb...@li... > > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > > > > > > ------------------------------------------------------- > This SF.Net email is sponsored by: > Power Architecture Resource Center: Free content, downloads, discussions, > and more. http://solutions.newsforge.com/ibmarch.tmpl > _______________________________________________ > mdbtools-dev mailing list > mdb...@li... > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > |
From: Andreas R. <mdb...@ri...> - 2005-10-03 13:14:16
|
I can find some of the code, but don't have access from it at my current location... Basically mdb_bind_column to a buffer of size MDB_BIND_SIZE Then after fetching the row. Copy this data into a size MDB_MEMO_OVERHEAD and use the new buffer in the call to mdb_read_ole. This call will then copy the data into the original bound buffer and return the size for you. Copy that data into yet another buffer. Then call mdb_read_ole_next to see if there is more data. If there is it will again be copied to the bound buffer. You can append it to the data you already stored away. Then call mdb_read_ole_next again until it returns 0. I'll paste some code here when I get home tonight. There is one little snippet of code in the sources that also uses mdb_read_ole. That's where I figured out how to use this. Just grep for mdb_read_ole in all the sources. Thanks, Andy ----- Original Message ----- From: "Shanon Mulley" <sha...@gm...> To: "Brian A. Seklecki" <lav...@sp...> Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt" <pa...@gm...>; <mdb...@li...> Sent: Monday, October 03, 2005 8:23 AM Subject: Re: [mdb-dev] Success - kind of... Andreas, You seem to be having more success with reading binary data from access then myself. I'm wondering if you can give me a few pointers with getting binary data out. I've been teaching myself how to use mdbtools, but am yet to work this out. You mention a command "mdb_read_ole". Can you tell me where this is used? I'm using the latest version (from cvs), but didnt enable odbc - that wouldnt make properly. Thanks. On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote: > Specifically which versions of the autotools did you use, just out of > curiosity? > > ~BAS > > On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: > > Hi, > > After beating autoconf, libtool, autogen and so on into pulp I have the > > CVS > > version running. It can correctly read the blobs with mdb_read_ole!!! > > Yes. 
> > However blobs that are longer than will cause an segment fault in > > mdb_read_ole_next. The problem seems to be that the length calculation > > is > > wrong. The following code in mdb_find_pg_row comes back with a len of > > 0xfffffb9b i.e. A negative number. > > > > mdb_swap_pgbuf(mdb); > > mdb_find_row(mdb, row, off, len); > > mdb_swap_pgbuf(mdb); > > > > In mdb_find_row the next_start is 4096 even though I am not loading the > > blob > > from the first row. By testing thing I think I figured out that > > sometimes > > blobs come with extensions, but the pg_row for the extension (i.e. The > > next > > pointer) is zero. Therefore making the following change in > > mdb_read_ole_next > > seemed to fix the crash (line 460) > > > > if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, > > &buf, &row_start, &len)) { > > return 0; > > } > > > > To > > > > if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, > > col->cur_blob_pg_row, > > &buf, &row_start, &len)) { > > return 0; > > } > > > > BTW: The CVS version of mdbtools.h has an #include <config.h> which > > probably > > won't work from a normal include directory. > > Thanks, > > Andy > > > > > From: Andreas Richter <mdb...@ri...> > > > Date: Fri, 30 Sep 2005 11:24:31 -0400 > > > To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' > > > <mdb...@ri...> > > > Cc: <mdb...@li...> > > > Subject: RE: [mdb-dev] Problem reading blobs > > > > > > No success building CVS dump under Darwin. The 0.6pre tar file did > > > work > > > correctly under a fink installation, but the autogen.sh file for the > > > CVS > > > dump did not work. Has anyone done this yet? I'll probably just > > > execute the > > > autogen.sh under linux and then move the files to Darwin. Any > > > suggestions > > > would be great. > > > Thanks, > > > Andy > > > > > > -----Original Message----- > > > From: Sam Moffatt [mailto:pa...@gm...] > > > Sent: Thursday, September 29, 2005 8:17 AM > > > To: Andreas Richter > > > Cc: mdb...@li... 
> > > Subject: Re: [mdb-dev] Problem reading blobs > > > > > > what version of the library and as others will say, the cvs version > > > has heaps of improvements in it. > > > > > > On 9/29/05, Andreas Richter <mdb...@ri...> wrote: > > >> > > >> > > >> Hi, > > >> > > >> I am trying to read an access 2k database with blobs. After reading > > >> the > > > blob > > >> with mdb_read_ole it will give me a 4k of data. Calling > > >> mdb_read_ole_next > > >> will always tell me that there is no more data. Is mdb_read_ole > > >> currently > > >> implemented for normal blobs? Can I help implementing it? I am using > > >> the > > >> library under Mac OS X. > > >> > > >> Thanks > > >> > > >> Andy > > >> > > > > > > > > > > > > > ------------------------------------------------------- > > This SF.Net email is sponsored by: > > Power Architecture Resource Center: Free content, downloads, > > discussions, > > and more. http://solutions.newsforge.com/ibmarch.tmpl > > _______________________________________________ > > mdbtools-dev mailing list > > mdb...@li... > > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > > > > ------------------------------------------------------- > This SF.Net email is sponsored by: > Power Architecture Resource Center: Free content, downloads, discussions, > and more. http://solutions.newsforge.com/ibmarch.tmpl > _______________________________________________ > mdbtools-dev mailing list > mdb...@li... > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > |
From: Shanon M. <sha...@gm...> - 2005-10-03 12:23:49
|
Andreas, You seem to be having more success with reading binary data from Access than I have. I'm wondering if you can give me a few pointers with getting binary data out. I've been teaching myself how to use mdbtools, but am yet to work this out. You mention a command "mdb_read_ole". Can you tell me where this is used? I'm using the latest version (from cvs), but didn't enable odbc - that wouldn't build properly. Thanks. On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote: > Specifically which versions of the autotools did you use, just out of > curiosity? > > ~BAS > > On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: > > Hi, > > After beating autoconf, libtool, autogen and so on into pulp I have the CVS > > version running. It can correctly read the blobs with mdb_read_ole!!! Yes. > > However blobs that are longer will cause a segmentation fault in > > mdb_read_ole_next. The problem seems to be that the length calculation is > > wrong. The following code in mdb_find_pg_row comes back with a len of > > 0xfffffb9b, i.e. a negative number. > > > > mdb_swap_pgbuf(mdb); > > mdb_find_row(mdb, row, off, len); > > mdb_swap_pgbuf(mdb); > > > > In mdb_find_row the next_start is 4096 even though I am not loading the blob > > from the first row. By testing things I think I figured out that sometimes > > blobs come with extensions, but the pg_row for the extension (i.e. the next > > pointer) is zero. Therefore making the following change in mdb_read_ole_next > > seemed to fix the crash (line 460) > > > > if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, > > &buf, &row_start, &len)) { > > return 0; > > } > > > > To > > > > if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, > > col->cur_blob_pg_row, > > &buf, &row_start, &len)) { > > return 0; > > } > > > > BTW: The CVS version of mdbtools.h has an #include <config.h> which probably > > won't work from a normal include directory. 
> > Thanks, > > Andy > > > > > From: Andreas Richter <mdb...@ri...> > > > Date: Fri, 30 Sep 2005 11:24:31 -0400 > > > To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' > > > <mdb...@ri...> > > > Cc: <mdb...@li...> > > > Subject: RE: [mdb-dev] Problem reading blobs > > > > > > No success building CVS dump under Darwin. The 0.6pre tar file did work > > > correctly under a fink installation, but the autogen.sh file for the CVS > > > dump did not work. Has anyone done this yet? I'll probably just execute the > > > autogen.sh under linux and then move the files to Darwin. Any suggestions > > > would be great. > > > Thanks, > > > Andy > > > > > > -----Original Message----- > > > From: Sam Moffatt [mailto:pa...@gm...] > > > Sent: Thursday, September 29, 2005 8:17 AM > > > To: Andreas Richter > > > Cc: mdb...@li... > > > Subject: Re: [mdb-dev] Problem reading blobs > > > > > > what version of the library and as others will say, the cvs version > > > has heaps of improvements in it. > > > > > > On 9/29/05, Andreas Richter <mdb...@ri...> wrote: > > >> > > >> > > >> Hi, > > >> > > >> I am trying to read an access 2k database with blobs. After reading the > > > blob > > >> with mdb_read_ole it will give me a 4k of data. Calling mdb_read_ole_next > > >> will always tell me that there is no more data. Is mdb_read_ole currently > > >> implemented for normal blobs? Can I help implementing it? I am using the > > >> library under Mac OS X. > > >> > > >> Thanks > > >> > > >> Andy > > >> > > > > > > > > > > > > > ------------------------------------------------------- > > This SF.Net email is sponsored by: > > Power Architecture Resource Center: Free content, downloads, discussions, > > and more. http://solutions.newsforge.com/ibmarch.tmpl > > _______________________________________________ > > mdbtools-dev mailing list > > mdb...@li... 
> > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > > _______________________________________________ > mdbtools-dev mailing list > mdb...@li... > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > |
From: Brian A. S. <lav...@sp...> - 2005-10-02 23:24:57
|
Specifically which versions of the autotools did you use, just out of curiosity? ~BAS On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: > Hi, > After beating autoconf, libtool, autogen and so on into pulp I have the CVS > version running. It can correctly read the blobs with mdb_read_ole!!! Yes. > However blobs that are longer than will cause an segment fault in > mdb_read_ole_next. The problem seems to be that the length calculation is > wrong. The following code in mdb_find_pg_row comes back with a len of > 0xfffffb9b i.e. A negative number. > > mdb_swap_pgbuf(mdb); > mdb_find_row(mdb, row, off, len); > mdb_swap_pgbuf(mdb); > > In mdb_find_row the next_start is 4096 even though I am not loading the blob > from the first row. By testing thing I think I figured out that sometimes > blobs come with extensions, but the pg_row for the extension (i.e. The next > pointer) is zero. Therefore making the following change in mdb_read_ole_next > seemed to fix the crash (line 460) > > if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, > &buf, &row_start, &len)) { > return 0; > } > > To > > if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, > col->cur_blob_pg_row, > &buf, &row_start, &len)) { > return 0; > } > > BTW: The CVS version of mdbtools.h has an #include <config.h> which probably > won't work from a normal include directory. > Thanks, > Andy > > > From: Andreas Richter <mdb...@ri...> > > Date: Fri, 30 Sep 2005 11:24:31 -0400 > > To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' > > <mdb...@ri...> > > Cc: <mdb...@li...> > > Subject: RE: [mdb-dev] Problem reading blobs > > > > No success building CVS dump under Darwin. The 0.6pre tar file did work > > correctly under a fink installation, but the autogen.sh file for the CVS > > dump did not work. Has anyone done this yet? I'll probably just execute the > > autogen.sh under linux and then move the files to Darwin. Any suggestions > > would be great. 
> > Thanks, > > Andy > > > > -----Original Message----- > > From: Sam Moffatt [mailto:pa...@gm...] > > Sent: Thursday, September 29, 2005 8:17 AM > > To: Andreas Richter > > Cc: mdb...@li... > > Subject: Re: [mdb-dev] Problem reading blobs > > > > what version of the library and as others will say, the cvs version > > has heaps of improvements in it. > > > > On 9/29/05, Andreas Richter <mdb...@ri...> wrote: > >> > >> > >> Hi, > >> > >> I am trying to read an access 2k database with blobs. After reading the > > blob > >> with mdb_read_ole it will give me a 4k of data. Calling mdb_read_ole_next > >> will always tell me that there is no more data. Is mdb_read_ole currently > >> implemented for normal blobs? Can I help implementing it? I am using the > >> library under Mac OS X. > >> > >> Thanks > >> > >> Andy > >> > > > > > _______________________________________________ > mdbtools-dev mailing list > mdb...@li... > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev |
From: Andreas R. <mdb...@ri...> - 2005-10-02 22:27:32
|
Hi, After beating autoconf, libtool, autogen and so on into pulp I have the CVS version running. It can correctly read the blobs with mdb_read_ole!!! Yes. However blobs that are longer will cause a segmentation fault in mdb_read_ole_next. The problem seems to be that the length calculation is wrong. The following code in mdb_find_pg_row comes back with a len of 0xfffffb9b, i.e. a negative number. mdb_swap_pgbuf(mdb); mdb_find_row(mdb, row, off, len); mdb_swap_pgbuf(mdb); In mdb_find_row the next_start is 4096 even though I am not loading the blob from the first row. By testing things I think I figured out that sometimes blobs come with extensions, but the pg_row for the extension (i.e. the next pointer) is zero. Therefore making the following change in mdb_read_ole_next seemed to fix the crash (line 460) if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, &buf, &row_start, &len)) { return 0; } To if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, col->cur_blob_pg_row, &buf, &row_start, &len)) { return 0; } BTW: The CVS version of mdbtools.h has an #include <config.h> which probably won't work from a normal include directory. Thanks, Andy > From: Andreas Richter <mdb...@ri...> > Date: Fri, 30 Sep 2005 11:24:31 -0400 > To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' > <mdb...@ri...> > Cc: <mdb...@li...> > Subject: RE: [mdb-dev] Problem reading blobs > > No success building CVS dump under Darwin. The 0.6pre tar file did work > correctly under a fink installation, but the autogen.sh file for the CVS > dump did not work. Has anyone done this yet? I'll probably just execute the > autogen.sh under linux and then move the files to Darwin. Any suggestions > would be great. > Thanks, > Andy > > -----Original Message----- > From: Sam Moffatt [mailto:pa...@gm...] > Sent: Thursday, September 29, 2005 8:17 AM > To: Andreas Richter > Cc: mdb...@li... 
> Subject: Re: [mdb-dev] Problem reading blobs > > what version of the library and as others will say, the cvs version > has heaps of improvements in it. > > On 9/29/05, Andreas Richter <mdb...@ri...> wrote: >> >> >> Hi, >> >> I am trying to read an access 2k database with blobs. After reading the > blob >> with mdb_read_ole it will give me a 4k of data. Calling mdb_read_ole_next >> will always tell me that there is no more data. Is mdb_read_ole currently >> implemented for normal blobs? Can I help implementing it? I am using the >> library under Mac OS X. >> >> Thanks >> >> Andy >> > |
From: Andreas R. <mdb...@ri...> - 2005-09-30 15:24:40
|
No success building CVS dump under Darwin. The 0.6pre tar file did work correctly under a fink installation, but the autogen.sh file for the CVS dump did not work. Has anyone done this yet? I'll probably just execute the autogen.sh under linux and then move the files to Darwin. Any suggestions would be great. Thanks, Andy -----Original Message----- From: Sam Moffatt [mailto:pa...@gm...] Sent: Thursday, September 29, 2005 8:17 AM To: Andreas Richter Cc: mdb...@li... Subject: Re: [mdb-dev] Problem reading blobs what version of the library and as others will say, the cvs version has heaps of improvements in it. On 9/29/05, Andreas Richter <mdb...@ri...> wrote: > > > Hi, > > I am trying to read an access 2k database with blobs. After reading the blob > with mdb_read_ole it will give me a 4k of data. Calling mdb_read_ole_next > will always tell me that there is no more data. Is mdb_read_ole currently > implemented for normal blobs? Can I help implementing it? I am using the > library under Mac OS X. > > Thanks > > Andy > |
From: Andreas R. <an...@ri...> - 2005-09-29 12:33:06
|
I downloaded the latest tar file. I can certainly log into cvs and try again. The dump seemed recent so I didn't see the need for the cvs. I'll let you know tomorrow. Thanks, Andy -----Original Message----- From: Sam Moffatt [mailto:pa...@gm...] Sent: Thursday, September 29, 2005 8:17 AM To: Andreas Richter Cc: mdb...@li... Subject: Re: [mdb-dev] Problem reading blobs what version of the library and as others will say, the cvs version has heaps of improvements in it. On 9/29/05, Andreas Richter <mdb...@ri...> wrote: > > > Hi, > > I am trying to read an access 2k database with blobs. After reading the blob > with mdb_read_ole it will give me a 4k of data. Calling mdb_read_ole_next > will always tell me that there is no more data. Is mdb_read_ole currently > implemented for normal blobs? Can I help implementing it? I am using the > library under Mac OS X. > > Thanks > > Andy > |
From: Sam M. <pa...@gm...> - 2005-09-29 12:16:54
|
What version of the library are you using? And, as others will say, the cvs version has heaps of improvements in it. On 9/29/05, Andreas Richter <mdb...@ri...> wrote: > > > Hi, > > I am trying to read an access 2k database with blobs. After reading the blob > with mdb_read_ole it will give me a 4k of data. Calling mdb_read_ole_next > will always tell me that there is no more data. Is mdb_read_ole currently > implemented for normal blobs? Can I help implementing it? I am using the > library under Mac OS X. > > Thanks > > Andy > |
From: Andreas R. <mdb...@ri...> - 2005-09-29 11:46:29
|
Hi, I am trying to read an Access 2k database with blobs. After reading the blob with mdb_read_ole it will give me 4k of data. Calling mdb_read_ole_next will always tell me that there is no more data. Is mdb_read_ole currently implemented for normal blobs? Can I help implement it? I am using the library under Mac OS X. Thanks Andy |
From: Shanon M. <sha...@gm...> - 2005-09-25 13:02:22
|
I'm trying to install mdbtools-0.6pre1 on Fedora Core 4, but am having some problems. Installing from the source, the "./configure" bit seems to go fine, but when I run "make", I get some unpromising messages, like: ---START--- backend.c:31: error: static declaration of 'mdb_backends' follows non-static declaration ../../include/mdbtools.h:150: error: previous declaration of 'mdb_backends' was here make[2]: *** [backend.lo] Error 1 make[2]: Leaving directory `/home/mythtv/mdbtools-0.6pre1/src/libmdb' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/mythtv/mdbtools-0.6pre1/src' make: *** [all-recursive] Error 1 ---END--- I've seen other postings about FC4, but I'm not sure if they describe the same problem I'm having (I'm kind of new to linux, so debugging install routines is pretty tricky for me). Any ideas anyone? Thanks. |
From: Jeff S. <why...@ya...> - 2005-09-25 00:40:08
|
I would guess you are trying to use 0.6pre1. There is a known issue relating to the version of libtool used by the packager. In the top build directory, run 'libtoolize --force --copy'. You should then be able to configure, make, and install as normal. Note that you will also want to remove the non-.so versions of the libraries from /usr/local/lib. --- Jonathan Dixon <dix...@ly...> wrote: > I know that I ran into this once before, but I don't know what I did to fix it: > > When I try to build mdbtools, the libraries are placed into /usr/local/lib without the > .so suffix, so that attempts to reference them fail. I am running Fedora Core 4 (and > happened before with Fedora Core 2). > > Anyone have suggestions on what needs to be done in the configure/make process to make > sure that the proper suffix gets put on the libraries? > > Thanks, > > Jon Dixon > > -- > _______________________________________________ > > Search for businesses by name, location, or phone number. -Lycos Yellow Pages > > http://r.lycos.com/r/yp_emailfooter/http://yellowpages.lycos.com/default.asp?SRC=lycos10 > > > > ------------------------------------------------------- > SF.Net email is sponsored by: > Tame your development challenges with Apache's Geronimo App Server. Download > it for free - -and be entered to win a 42" plasma tv or your very own > Sony(tm)PSP. Click here to play: http://sourceforge.net/geronimo.php > _______________________________________________ > mdbtools-dev mailing list > mdb...@li... > https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > __________________________________ Yahoo! Mail - PC Magazine Editors' Choice 2005 http://mail.yahoo.com |
From: neeraj c. <nee...@ya...> - 2005-09-24 11:56:41
|
Hello everyone, I just wanted to know where I can find the latest version of mdbtools, because I have downloaded and installed "mdbtools 0.5" but am facing many problems fetching records from a .mdb file on my linux box. Waiting eagerly for the reply. Thanks in advance __________________________________________________________ Yahoo! India Matrimony: Find your partner now. Go to http://yahoo.shaadi.com |