From: Andreas R. <mdb...@ri...> - 2005-09-29 11:46:29
|
Hi, I am trying to read an Access 2000 database with blobs. After reading the blob with mdb_read_ole it gives me 4 KB of data, but calling mdb_read_ole_next always tells me that there is no more data. Is mdb_read_ole currently implemented for normal blobs? Can I help implement it? I am using the library under Mac OS X. Thanks, Andy |
From: Sam M. <pa...@gm...> - 2005-09-29 12:16:54
|
What version of the library are you using? And, as others will say, the CVS version has heaps of improvements in it. |
From: Andreas R. <an...@ri...> - 2005-09-29 12:33:06
|
I downloaded the latest tar file. I can certainly log into CVS and try again; the dump seemed recent, so I didn't see the need for CVS. I'll let you know tomorrow. Thanks, Andy |
From: Andreas R. <mdb...@ri...> - 2005-09-30 15:24:40
|
No success building the CVS dump under Darwin. The 0.6pre tar file did work correctly under a Fink installation, but the autogen.sh in the CVS dump did not work. Has anyone done this yet? I'll probably just run autogen.sh under Linux and then move the generated files to Darwin. Any suggestions would be great. Thanks, Andy |
From: Andreas R. <mdb...@ri...> - 2005-10-02 22:27:32
|
Hi, After beating autoconf, libtool, autogen and so on into a pulp, I have the CVS version running. It can correctly read blobs with mdb_read_ole! However, longer blobs cause a segmentation fault in mdb_read_ole_next. The problem seems to be that the length calculation is wrong: the following code in mdb_find_pg_row comes back with a len of 0xfffffb9b, i.e. a negative number.

    mdb_swap_pgbuf(mdb);
    mdb_find_row(mdb, row, off, len);
    mdb_swap_pgbuf(mdb);

In mdb_find_row the next_start is 4096 even though I am not loading the blob from the first row. By experimenting, I think I figured out that sometimes blobs come with extensions, but the pg_row for the extension (i.e. the next pointer) is zero. Making the following change in mdb_read_ole_next (line 460) seemed to fix the crash. From:

    if (mdb_find_pg_row(mdb, col->cur_blob_pg_row,
            &buf, &row_start, &len)) {
        return 0;
    }

to:

    if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, col->cur_blob_pg_row,
            &buf, &row_start, &len)) {
        return 0;
    }

BTW: the CVS version of mdbtools.h has an #include <config.h>, which probably won't work from a normal include directory. Thanks, Andy |
From: Brian A. S. <lav...@sp...> - 2005-10-02 23:24:57
|
Specifically, which versions of the autotools did you use, just out of curiosity? ~BAS |
From: Shanon M. <sha...@gm...> - 2005-10-03 12:23:49
|
Andreas, You seem to be having more success with reading binary data from Access than I am, and I'm wondering if you can give me a few pointers on getting binary data out. I've been teaching myself how to use mdbtools, but have yet to work this out. You mention a function "mdb_read_ole". Can you tell me where this is used? I'm using the latest version (from CVS), but didn't enable ODBC; that wouldn't build properly. Thanks. |
From: Andreas R. <mdb...@ri...> - 2005-10-03 13:14:16
|
I can find some of the code, but don't have access to it at my current location... Basically: mdb_bind_column to a buffer of size MDB_BIND_SIZE. Then, after fetching the row, copy this data into a buffer of size MDB_MEMO_OVERHEAD and use that new buffer in the call to mdb_read_ole. This call will copy the data into the original bound buffer and return the size for you. Copy that data into yet another buffer. Then call mdb_read_ole_next to see if there is more data; if there is, it will again be copied to the bound buffer, and you can append it to the data you already stored away. Keep calling mdb_read_ole_next until it returns 0. I'll paste some code here when I get home tonight. There is one little snippet of code in the sources that also uses mdb_read_ole; that's where I figured out how to use this. Just grep for mdb_read_ole in all the sources. Thanks, Andy |
From: Shanon M. <sha...@gm...> - 2005-10-04 00:28:19
|
So, is this to say that none of the standard mdbtools do this (like mdb-export or mdb-sql) and I have to play around with the source? I'm looking forward to seeing your snippet of code, although I'm thinking it might break my brain getting it to work. I'm a recent convert to Linux, and so far have avoided playing around with source code, but I'm willing to give it a go if need be. |
From: Andreas R. <mdb...@ri...> - 2005-10-04 01:30:35
|
Here is the promised snippet:

    MdbHandle *mdb = mdb_open("<somepath>", MDB_TABLE);
    mdb_read_catalog(mdb, MDB_TABLE);
    MdbTableDef *table = mdb_read_table_by_name(mdb, "<sometablewithblob>", MDB_TABLE);
    mdb_read_columns(table);
    mdb_read_indices(table);
    mdb_rewind_table(table);

    int blobLen = MDB_BIND_SIZE;
    gchar blobChunk[MDB_BIND_SIZE];
    int colID = mdb_bind_column_by_name(table, "<blobcolumn>", blobChunk, &blobLen);
    MdbColumn *blobCol = g_ptr_array_index(table->columns, colID - 1);

    if (mdb_fetch_row(table)) {
        gchar binder[MDB_MEMO_OVERHEAD];
        memcpy(binder, blobChunk, MDB_MEMO_OVERHEAD);
        int rlen;
        if ((rlen = mdb_ole_read(mdb, blobCol, binder, MDB_BIND_SIZE))) {
            char *mem = malloc(rlen);
            int pos = 0;
            memcpy(&mem[pos], blobChunk, rlen);
            pos += rlen;
            while ((rlen = mdb_ole_read_next(mdb, blobCol, binder))) {
                mem = realloc(mem, pos + rlen);
                memcpy(&mem[pos], blobChunk, rlen);
                pos += rlen;
            }
            /* Use the content of mem here.  The length of the data
             * is in the pos variable at this point. */
            free(mem);
        }
    } |
From: Andreas R. <an...@ri...> - 2005-10-04 01:23:23
|
Here is the promised snippet:

    MdbHandle* mdb = mdb_open("<somepath>", MDB_TABLE);
    mdb_read_catalog(mdb, MDB_TABLE);
    MdbTableDef *table = mdb_read_table_by_name(mdb, "<sometablewithblob>", MDB_TABLE);
    mdb_read_columns(table);
    mdb_read_indices(table);
    mdb_rewind_table(table);

    int blobLen = MDB_BIND_SIZE;
    gchar blobChunk[MDB_BIND_SIZE];
    int colID = mdb_bind_column_by_name(table, "<blobcolumn>", blobChunk, &blobLen);
    MdbColumn * blobCol = g_ptr_array_index(table->columns, colID - 1);
    if (mdb_fetch_row(table))
    {
        gchar binder[MDB_MEMO_OVERHEAD];
        memcpy(binder, blobChunk, MDB_MEMO_OVERHEAD);
        int rlen;
        if (rlen = mdb_ole_read(mdb, blobCol, binder, MDB_BIND_SIZE))
        {
            char* mem = malloc(rlen);
            int pos = 0;
            memcpy(&mem[pos], blobChunk, rlen);
            pos += rlen;

            while (rlen = mdb_ole_read_next(mdb, blobCol, binder))
            {
                mem = realloc(mem, pos + rlen);
                memcpy(&mem[pos], blobChunk, rlen);
                pos += rlen;
            }
            // Use the content of mem here.
            // The length of the data is located
            // in the pos variable at this point.
            free(mem);
        }
    }

> From: Shanon Mulley <sha...@gm...> > Reply-To: Shanon Mulley <sha...@gm...> > Date: Tue, 4 Oct 2005 10:28:04 +1000 > To: Andreas Richter <mdb...@ri...> > Cc: "Brian A. Seklecki" <lav...@sp...>, Sam Moffatt > <pa...@gm...>, <mdb...@li...> > Subject: Re: [mdb-dev] Success - kind of... > > So, is this to say that none of the standard mdbtools does this (like > mdb-export, or mdb-sql) - I have to play around with the source? > > I'm looking forward to seeing your snippet of code, although I'm > thinking it might break my brain getting it to work. I'm a recent > convert to linux, and so far have avoided playing around with source > code, but I'm willing to give it a go if need be. > > On 10/3/05, Andreas Richter <mdb...@ri...> wrote: >> I can find some of the code, but don't have access from it at my current >> location... >> Basically mdb_bind_column to a buffer of size MDB_BIND_SIZE >> Then after fetching the row. 
Copy this data into a size MDB_MEMO_OVERHEAD >> and use the new buffer in the call to mdb_read_ole. This call will then copy >> the data into the original bound buffer and return the size for you. Copy >> that data into yet another buffer. Then call mdb_read_ole_next to see if >> there is more data. If there is it will again be copied to the bound buffer. >> You can append it to the data you already stored away. Then call >> mdb_read_ole_next again until it returns 0. I'll paste some code here when I >> get home tonight. >> There is one little snippet of code in the sources that also uses >> mdb_read_ole. That's where I figured out how to use this. Just grep for >> mdb_read_ole in all the sources. >> Thanks, >> Andy >> ----- Original Message ----- >> From: "Shanon Mulley" <sha...@gm...> >> To: "Brian A. Seklecki" <lav...@sp...> >> Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt" >> <pa...@gm...>; <mdb...@li...> >> Sent: Monday, October 03, 2005 8:23 AM >> Subject: Re: [mdb-dev] Success - kind of... >> >> >> Andreas, >> >> You seem to be having more success with reading binary data from >> access then myself. I'm wondering if you can give me a few pointers >> with getting binary data out. I've been teaching myself how to use >> mdbtools, but am yet to work this out. You mention a command >> "mdb_read_ole". Can you tell me where this is used? >> >> I'm using the latest version (from cvs), but didnt enable odbc - that >> wouldnt make properly. >> >> Thanks. >> >> On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote: >>> Specifically which versions of the autotools did you use, just out of >>> curiosity? >>> >>> ~BAS >>> >>> On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: >>>> Hi, >>>> After beating autoconf, libtool, autogen and so on into pulp I have the >>>> CVS >>>> version running. It can correctly read the blobs with mdb_read_ole!!! >>>> Yes. >>>> However blobs that are longer than will cause an segment fault in >>>> mdb_read_ole_next. 
The problem seems to be that the length calculation >>>> is >>>> wrong. The following code in mdb_find_pg_row comes back with a len of >>>> 0xfffffb9b i.e. A negative number. >>>> >>>> mdb_swap_pgbuf(mdb); >>>> mdb_find_row(mdb, row, off, len); >>>> mdb_swap_pgbuf(mdb); >>>> >>>> In mdb_find_row the next_start is 4096 even though I am not loading the >>>> blob >>>> from the first row. By testing thing I think I figured out that >>>> sometimes >>>> blobs come with extensions, but the pg_row for the extension (i.e. The >>>> next >>>> pointer) is zero. Therefore making the following change in >>>> mdb_read_ole_next >>>> seemed to fix the crash (line 460) >>>> >>>> if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, >>>> &buf, &row_start, &len)) { >>>> return 0; >>>> } >>>> >>>> To >>>> >>>> if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, >>>> col->cur_blob_pg_row, >>>> &buf, &row_start, &len)) { >>>> return 0; >>>> } >>>> >>>> BTW: The CVS version of mdbtools.h has an #include <config.h> which >>>> probably >>>> won't work from a normal include directory. >>>> Thanks, >>>> Andy >>>> >>>>> From: Andreas Richter <mdb...@ri...> >>>>> Date: Fri, 30 Sep 2005 11:24:31 -0400 >>>>> To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' >>>>> <mdb...@ri...> >>>>> Cc: <mdb...@li...> >>>>> Subject: RE: [mdb-dev] Problem reading blobs >>>>> >>>>> No success building CVS dump under Darwin. The 0.6pre tar file did >>>>> work >>>>> correctly under a fink installation, but the autogen.sh file for the >>>>> CVS >>>>> dump did not work. Has anyone done this yet? I'll probably just >>>>> execute the >>>>> autogen.sh under linux and then move the files to Darwin. Any >>>>> suggestions >>>>> would be great. >>>>> Thanks, >>>>> Andy >>>>> >>>>> -----Original Message----- >>>>> From: Sam Moffatt [mailto:pa...@gm...] >>>>> Sent: Thursday, September 29, 2005 8:17 AM >>>>> To: Andreas Richter >>>>> Cc: mdb...@li... 
>>>>> Subject: Re: [mdb-dev] Problem reading blobs >>>>> >>>>> what version of the library and as others will say, the cvs version >>>>> has heaps of improvements in it. >>>>> >>>>> On 9/29/05, Andreas Richter <mdb...@ri...> wrote: >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> I am trying to read an access 2k database with blobs. After reading >>>>>> the >>>>> blob >>>>>> with mdb_read_ole it will give me a 4k of data. Calling >>>>>> mdb_read_ole_next >>>>>> will always tell me that there is no more data. Is mdb_read_ole >>>>>> currently >>>>>> implemented for normal blobs? Can I help implementing it? I am using >>>>>> the >>>>>> library under Mac OS X. >>>>>> >>>>>> Thanks >>>>>> >>>>>> Andy >>>>>> >>>>> >>>> >>>> >>>> >>>> >>>> ------------------------------------------------------- >>>> This SF.Net email is sponsored by: >>>> Power Architecture Resource Center: Free content, downloads, >>>> discussions, >>>> and more. http://solutions.newsforge.com/ibmarch.tmpl >>>> _______________________________________________ >>>> mdbtools-dev mailing list >>>> mdb...@li... >>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>> >>> >>> >>> ------------------------------------------------------- >>> This SF.Net email is sponsored by: >>> Power Architecture Resource Center: Free content, downloads, discussions, >>> and more. http://solutions.newsforge.com/ibmarch.tmpl >>> _______________________________________________ >>> mdbtools-dev mailing list >>> mdb...@li... >>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>> >> >> >> >> ------------------------------------------------------- >> This SF.Net email is sponsored by: >> Power Architecture Resource Center: Free content, downloads, discussions, >> and more. http://solutions.newsforge.com/ibmarch.tmpl >> _______________________________________________ >> mdbtools-dev mailing list >> mdb...@li... >> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >> |
From: Shanon M. <sha...@gm...> - 2005-10-13 22:53:30
|
Andreas, Thanks for that snippet. I did a grep for mdb_read_ole in my source (Well, the whole mdbtools directory), and found nothing. This is with the 0.6pre (from the cvs or csv or whatever). And with this snippet you have given me, I'm assuming this is used before compiling mdbtools, to modify the source code. I'm not really that experienced with tooling around with source code like this (I work with a windows 4GL package). Can you give me a hint of what I'm changing? Thanks. On 10/4/05, Andreas Richter <an...@ri...> wrote: > Here is the promised snippet: > > MdbHandle* mdb = mdb_open("<somepath>", MDB_TABLE); > mdb_read_catalog(mdb, MDB_TABLE); > MdbTableDef *table = mdb_read_table_by_name(mdb, "<sometablewithblob>", > MDB_TABLE); > mdb_read_columns(table); > mdb_read_indices(table); > mdb_rewind_table(table); > > int blobLen = MDB_BIND_SIZE; > gchar blobChunk[MDB_BIND_SIZE]; > int colID = mdb_bind_column_by_name(table, "<blobcolumn>", blobChunk, > &blobLen); > MdbColumn * blobCol = g_ptr_array_index(table->columns, colID - 1); > if (mdb_fetch_row(table)) > { > gchar binder[MDB_MEMO_OVERHEAD]; > memcpy(binder, blobChunk, MDB_MEMO_OVERHEAD); > int rlen; > if (rlen = mdb_ole_read(mdb, blobCol, binder, MDB_BIND_SIZE)) > { > char* mem = malloc(rlen); > int pos = 0; > memcpy(&mem[pos], blobChunk, rlen); > pos += rlen; > > while (rlen = mdb_ole_read_next(mdb, blobCol, binder)) > { > mem = realloc(mem, pos + rlen); > memcpy(&mem[pos], blobChunk, rlen); > pos += rlen; > } > // Use the content of mem here. > // The length of the data is located > // in the pos variable at this point. > free(mem); > } > } > > > > From: Shanon Mulley <sha...@gm...> > > Reply-To: Shanon Mulley <sha...@gm...> > > Date: Tue, 4 Oct 2005 10:28:04 +1000 > > To: Andreas Richter <mdb...@ri...> > > Cc: "Brian A. Seklecki" <lav...@sp...>, Sam Moffatt > > <pa...@gm...>, <mdb...@li...> > > Subject: Re: [mdb-dev] Success - kind of... 
> > > > So, is this to say that none of the standard mdbtools does this (like > > mdb-export, or mdb-sql) - I have to play around with the source? > > > > I'm looking forward to seeing your snippet of code, although I'm > > thinking it might break my brain getting it to work. I'm a recent > > convert to linux, and so far have avoided playing around with source > > code, but I'm willing to give it a go if need be. > > > > On 10/3/05, Andreas Richter <mdb...@ri...> wrote: > >> I can find some of the code, but don't have access from it at my current > >> location... > >> Basically mdb_bind_column to a buffer of size MDB_BIND_SIZE > >> Then after fetching the row. Copy this data into a size MDB_MEMO_OVERHEAD > >> and use the new buffer in the call to mdb_read_ole. This call will then copy > >> the data into the original bound buffer and return the size for you. Copy > >> that data into yet another buffer. Then call mdb_read_ole_next to see if > >> there is more data. If there is it will again be copied to the bound buffer. > >> You can append it to the data you already stored away. Then call > >> mdb_read_ole_next again until it returns 0. I'll paste some code here when I > >> get home tonight. > >> There is one little snippet of code in the sources that also uses > >> mdb_read_ole. That's where I figured out how to use this. Just grep for > >> mdb_read_ole in all the sources. > >> Thanks, > >> Andy > >> ----- Original Message ----- > >> From: "Shanon Mulley" <sha...@gm...> > >> To: "Brian A. Seklecki" <lav...@sp...> > >> Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt" > >> <pa...@gm...>; <mdb...@li...> > >> Sent: Monday, October 03, 2005 8:23 AM > >> Subject: Re: [mdb-dev] Success - kind of... > >> > >> > >> Andreas, > >> > >> You seem to be having more success with reading binary data from > >> access then myself. I'm wondering if you can give me a few pointers > >> with getting binary data out. 
I've been teaching myself how to use > >> mdbtools, but am yet to work this out. You mention a command > >> "mdb_read_ole". Can you tell me where this is used? > >> > >> I'm using the latest version (from cvs), but didnt enable odbc - that > >> wouldnt make properly. > >> > >> Thanks. > >> > >> On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote: > >>> Specifically which versions of the autotools did you use, just out of > >>> curiosity? > >>> > >>> ~BAS > >>> > >>> On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: > >>>> Hi, > >>>> After beating autoconf, libtool, autogen and so on into pulp I have the > >>>> CVS > >>>> version running. It can correctly read the blobs with mdb_read_ole!!! > >>>> Yes. > >>>> However blobs that are longer than will cause an segment fault in > >>>> mdb_read_ole_next. The problem seems to be that the length calculation > >>>> is > >>>> wrong. The following code in mdb_find_pg_row comes back with a len of > >>>> 0xfffffb9b i.e. A negative number. > >>>> > >>>> mdb_swap_pgbuf(mdb); > >>>> mdb_find_row(mdb, row, off, len); > >>>> mdb_swap_pgbuf(mdb); > >>>> > >>>> In mdb_find_row the next_start is 4096 even though I am not loading the > >>>> blob > >>>> from the first row. By testing thing I think I figured out that > >>>> sometimes > >>>> blobs come with extensions, but the pg_row for the extension (i.e. The > >>>> next > >>>> pointer) is zero. Therefore making the following change in > >>>> mdb_read_ole_next > >>>> seemed to fix the crash (line 460) > >>>> > >>>> if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, > >>>> &buf, &row_start, &len)) { > >>>> return 0; > >>>> } > >>>> > >>>> To > >>>> > >>>> if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, > >>>> col->cur_blob_pg_row, > >>>> &buf, &row_start, &len)) { > >>>> return 0; > >>>> } > >>>> > >>>> BTW: The CVS version of mdbtools.h has an #include <config.h> which > >>>> probably > >>>> won't work from a normal include directory. 
> >>>> Thanks, > >>>> Andy > >>>> > >>>>> From: Andreas Richter <mdb...@ri...> > >>>>> Date: Fri, 30 Sep 2005 11:24:31 -0400 > >>>>> To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' > >>>>> <mdb...@ri...> > >>>>> Cc: <mdb...@li...> > >>>>> Subject: RE: [mdb-dev] Problem reading blobs > >>>>> > >>>>> No success building CVS dump under Darwin. The 0.6pre tar file did > >>>>> work > >>>>> correctly under a fink installation, but the autogen.sh file for the > >>>>> CVS > >>>>> dump did not work. Has anyone done this yet? I'll probably just > >>>>> execute the > >>>>> autogen.sh under linux and then move the files to Darwin. Any > >>>>> suggestions > >>>>> would be great. > >>>>> Thanks, > >>>>> Andy > >>>>> > >>>>> -----Original Message----- > >>>>> From: Sam Moffatt [mailto:pa...@gm...] > >>>>> Sent: Thursday, September 29, 2005 8:17 AM > >>>>> To: Andreas Richter > >>>>> Cc: mdb...@li... > >>>>> Subject: Re: [mdb-dev] Problem reading blobs > >>>>> > >>>>> what version of the library and as others will say, the cvs version > >>>>> has heaps of improvements in it. > >>>>> > >>>>> On 9/29/05, Andreas Richter <mdb...@ri...> wrote: > >>>>>> > >>>>>> > >>>>>> Hi, > >>>>>> > >>>>>> I am trying to read an access 2k database with blobs. After reading > >>>>>> the > >>>>> blob > >>>>>> with mdb_read_ole it will give me a 4k of data. Calling > >>>>>> mdb_read_ole_next > >>>>>> will always tell me that there is no more data. Is mdb_read_ole > >>>>>> currently > >>>>>> implemented for normal blobs? Can I help implementing it? I am using > >>>>>> the > >>>>>> library under Mac OS X. > >>>>>> > >>>>>> Thanks > >>>>>> > >>>>>> Andy > >>>>>> > >>>>> > >>>> > >>>> > >>>> > >>>> > >>>> ------------------------------------------------------- > >>>> This SF.Net email is sponsored by: > >>>> Power Architecture Resource Center: Free content, downloads, > >>>> discussions, > >>>> and more. 
http://solutions.newsforge.com/ibmarch.tmpl > >>>> _______________________________________________ > >>>> mdbtools-dev mailing list > >>>> mdb...@li... > >>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > >>> > >>> > >>> > >>> ------------------------------------------------------- > >>> This SF.Net email is sponsored by: > >>> Power Architecture Resource Center: Free content, downloads, discussions, > >>> and more. http://solutions.newsforge.com/ibmarch.tmpl > >>> _______________________________________________ > >>> mdbtools-dev mailing list > >>> mdb...@li... > >>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > >>> > >> > >> > >> > >> ------------------------------------------------------- > >> This SF.Net email is sponsored by: > >> Power Architecture Resource Center: Free content, downloads, discussions, > >> and more. http://solutions.newsforge.com/ibmarch.tmpl > >> _______________________________________________ > >> mdbtools-dev mailing list > >> mdb...@li... > >> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev > >> > > > |
From: Andreas R. <mdb...@ri...> - 2005-10-14 00:30:12
|
Actually the code I sent is code using the library. Not the applications, but the library. You can just write some code and use the library. If you were planning to incorporate this into the export command then there is work to be done. Thanks, Andy > From: Shanon Mulley <sha...@gm...> > Date: Fri, 14 Oct 2005 08:53:28 +1000 > To: Andreas Richter <an...@ri...> > Cc: Andreas Richter <mdb...@ri...>, "Brian A. Seklecki" > <lav...@sp...>, Sam Moffatt <pa...@gm...>, > <mdb...@li...> > Subject: Re: [mdb-dev] Success - kind of... > > Andreas, > > Thanks for that snippet. > > I did a grep for mdb_read_ole in my source (Well, the whole mdbtools > directory), and found nothing. This is with the 0.6pre (from the cvs > or csv or whatever). > > And with this snippet you have given me, I'm assuming this is used > before compiling mdbtools, to modify the source code. I'm not really > that experienced with tooling around with source code like this (I > work with a windows 4GL package). Can you give me a hint of what I'm > changing? > > Thanks. 
> > On 10/4/05, Andreas Richter <an...@ri...> wrote: >> Here is the promised snippet: >> >> MdbHandle* mdb = mdb_open("<somepath>", MDB_TABLE); >> mdb_read_catalog(mdb, MDB_TABLE); >> MdbTableDef *table = mdb_read_table_by_name(mdb, "<sometablewithblob>", >> MDB_TABLE); >> mdb_read_columns(table); >> mdb_read_indices(table); >> mdb_rewind_table(table); >> >> int blobLen = MDB_BIND_SIZE; >> gchar blobChunk[MDB_BIND_SIZE]; >> int colID = mdb_bind_column_by_name(table, "<blobcolumn>", blobChunk, >> &blobLen); >> MdbColumn * blobCol = g_ptr_array_index(table->columns, colID - 1); >> if (mdb_fetch_row(table)) >> { >> gchar binder[MDB_MEMO_OVERHEAD]; >> memcpy(binder, blobChunk, MDB_MEMO_OVERHEAD); >> int rlen; >> if (rlen = mdb_ole_read(mdb, blobCol, binder, MDB_BIND_SIZE)) >> { >> char* mem = malloc(rlen); >> int pos = 0; >> memcpy(&mem[pos], blobChunk, rlen); >> pos += rlen; >> >> while (rlen = mdb_ole_read_next(mdb, blobCol, binder)) >> { >> mem = realloc(mem, pos + rlen); >> memcpy(&mem[pos], blobChunk, rlen); >> pos += rlen; >> } >> // Use the content of mem here. >> // The length of the data is located >> // in the pos variable at this point. >> free(mem); >> } >> } >> >> >> >>> From: Shanon Mulley <sha...@gm...> >>> Reply-To: Shanon Mulley <sha...@gm...> >>> Date: Tue, 4 Oct 2005 10:28:04 +1000 >>> To: Andreas Richter <mdb...@ri...> >>> Cc: "Brian A. Seklecki" <lav...@sp...>, Sam Moffatt >>> <pa...@gm...>, <mdb...@li...> >>> Subject: Re: [mdb-dev] Success - kind of... >>> >>> So, is this to say that none of the standard mdbtools does this (like >>> mdb-export, or mdb-sql) - I have to play around with the source? >>> >>> I'm looking forward to seeing your snippet of code, although I'm >>> thinking it might break my brain getting it to work. I'm a recent >>> convert to linux, and so far have avoided playing around with source >>> code, but I'm willing to give it a go if need be. 
>>> >>> On 10/3/05, Andreas Richter <mdb...@ri...> wrote: >>>> I can find some of the code, but don't have access from it at my current >>>> location... >>>> Basically mdb_bind_column to a buffer of size MDB_BIND_SIZE >>>> Then after fetching the row. Copy this data into a size MDB_MEMO_OVERHEAD >>>> and use the new buffer in the call to mdb_read_ole. This call will then >>>> copy >>>> the data into the original bound buffer and return the size for you. Copy >>>> that data into yet another buffer. Then call mdb_read_ole_next to see if >>>> there is more data. If there is it will again be copied to the bound >>>> buffer. >>>> You can append it to the data you already stored away. Then call >>>> mdb_read_ole_next again until it returns 0. I'll paste some code here when >>>> I >>>> get home tonight. >>>> There is one little snippet of code in the sources that also uses >>>> mdb_read_ole. That's where I figured out how to use this. Just grep for >>>> mdb_read_ole in all the sources. >>>> Thanks, >>>> Andy >>>> ----- Original Message ----- >>>> From: "Shanon Mulley" <sha...@gm...> >>>> To: "Brian A. Seklecki" <lav...@sp...> >>>> Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt" >>>> <pa...@gm...>; <mdb...@li...> >>>> Sent: Monday, October 03, 2005 8:23 AM >>>> Subject: Re: [mdb-dev] Success - kind of... >>>> >>>> >>>> Andreas, >>>> >>>> You seem to be having more success with reading binary data from >>>> access then myself. I'm wondering if you can give me a few pointers >>>> with getting binary data out. I've been teaching myself how to use >>>> mdbtools, but am yet to work this out. You mention a command >>>> "mdb_read_ole". Can you tell me where this is used? >>>> >>>> I'm using the latest version (from cvs), but didnt enable odbc - that >>>> wouldnt make properly. >>>> >>>> Thanks. >>>> >>>> On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote: >>>>> Specifically which versions of the autotools did you use, just out of >>>>> curiosity? 
>>>>> >>>>> ~BAS >>>>> >>>>> On Fri, 2005-09-30 at 20:59, Andreas Richter wrote: >>>>>> Hi, >>>>>> After beating autoconf, libtool, autogen and so on into pulp I have the >>>>>> CVS >>>>>> version running. It can correctly read the blobs with mdb_read_ole!!! >>>>>> Yes. >>>>>> However blobs that are longer than will cause an segment fault in >>>>>> mdb_read_ole_next. The problem seems to be that the length calculation >>>>>> is >>>>>> wrong. The following code in mdb_find_pg_row comes back with a len of >>>>>> 0xfffffb9b i.e. A negative number. >>>>>> >>>>>> mdb_swap_pgbuf(mdb); >>>>>> mdb_find_row(mdb, row, off, len); >>>>>> mdb_swap_pgbuf(mdb); >>>>>> >>>>>> In mdb_find_row the next_start is 4096 even though I am not loading the >>>>>> blob >>>>>> from the first row. By testing thing I think I figured out that >>>>>> sometimes >>>>>> blobs come with extensions, but the pg_row for the extension (i.e. The >>>>>> next >>>>>> pointer) is zero. Therefore making the following change in >>>>>> mdb_read_ole_next >>>>>> seemed to fix the crash (line 460) >>>>>> >>>>>> if (mdb_find_pg_row(mdb, col->cur_blob_pg_row, >>>>>> &buf, &row_start, &len)) { >>>>>> return 0; >>>>>> } >>>>>> >>>>>> To >>>>>> >>>>>> if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, >>>>>> col->cur_blob_pg_row, >>>>>> &buf, &row_start, &len)) { >>>>>> return 0; >>>>>> } >>>>>> >>>>>> BTW: The CVS version of mdbtools.h has an #include <config.h> which >>>>>> probably >>>>>> won't work from a normal include directory. >>>>>> Thanks, >>>>>> Andy >>>>>> >>>>>>> From: Andreas Richter <mdb...@ri...> >>>>>>> Date: Fri, 30 Sep 2005 11:24:31 -0400 >>>>>>> To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter' >>>>>>> <mdb...@ri...> >>>>>>> Cc: <mdb...@li...> >>>>>>> Subject: RE: [mdb-dev] Problem reading blobs >>>>>>> >>>>>>> No success building CVS dump under Darwin. 
The 0.6pre tar file did >>>>>>> work >>>>>>> correctly under a fink installation, but the autogen.sh file for the >>>>>>> CVS >>>>>>> dump did not work. Has anyone done this yet? I'll probably just >>>>>>> execute the >>>>>>> autogen.sh under linux and then move the files to Darwin. Any >>>>>>> suggestions >>>>>>> would be great. >>>>>>> Thanks, >>>>>>> Andy >>>>>>> >>>>>>> -----Original Message----- >>>>>>> From: Sam Moffatt [mailto:pa...@gm...] >>>>>>> Sent: Thursday, September 29, 2005 8:17 AM >>>>>>> To: Andreas Richter >>>>>>> Cc: mdb...@li... >>>>>>> Subject: Re: [mdb-dev] Problem reading blobs >>>>>>> >>>>>>> what version of the library and as others will say, the cvs version >>>>>>> has heaps of improvements in it. >>>>>>> >>>>>>> On 9/29/05, Andreas Richter <mdb...@ri...> wrote: >>>>>>>> >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I am trying to read an access 2k database with blobs. After reading >>>>>>>> the >>>>>>> blob >>>>>>>> with mdb_read_ole it will give me a 4k of data. Calling >>>>>>>> mdb_read_ole_next >>>>>>>> will always tell me that there is no more data. Is mdb_read_ole >>>>>>>> currently >>>>>>>> implemented for normal blobs? Can I help implementing it? I am using >>>>>>>> the >>>>>>>> library under Mac OS X. >>>>>>>> >>>>>>>> Thanks >>>>>>>> >>>>>>>> Andy >>>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------------------------------- >>>>>> This SF.Net email is sponsored by: >>>>>> Power Architecture Resource Center: Free content, downloads, >>>>>> discussions, >>>>>> and more. http://solutions.newsforge.com/ibmarch.tmpl >>>>>> _______________________________________________ >>>>>> mdbtools-dev mailing list >>>>>> mdb...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>>>> >>>>> >>>>> >>>>> ------------------------------------------------------- >>>>> This SF.Net email is sponsored by: >>>>> Power Architecture Resource Center: Free content, downloads, discussions, >>>>> and more. 
http://solutions.newsforge.com/ibmarch.tmpl >>>>> _______________________________________________ >>>>> mdbtools-dev mailing list >>>>> mdb...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>>>> >>>> >>>> >>>> >>>> ------------------------------------------------------- >>>> This SF.Net email is sponsored by: >>>> Power Architecture Resource Center: Free content, downloads, discussions, >>>> and more. http://solutions.newsforge.com/ibmarch.tmpl >>>> _______________________________________________ >>>> mdbtools-dev mailing list >>>> mdb...@li... >>>> https://lists.sourceforge.net/lists/listinfo/mdbtools-dev >>>> >> >> >> |