From: Andreas R. <mdb...@ri...> - 2005-10-04 01:30:35
Here is the promised snippet:

#include <string.h>
#include <stdlib.h>
#include <mdbtools.h>

/* MDB_NOFLAGS: the second argument of mdb_open() takes file flags,
 * not an object type. */
MdbHandle *mdb = mdb_open("<somepath>", MDB_NOFLAGS);
mdb_read_catalog(mdb, MDB_TABLE);
MdbTableDef *table = mdb_read_table_by_name(mdb, "<sometablewithblob>", MDB_TABLE);
mdb_read_columns(table);
mdb_read_indices(table);
mdb_rewind_table(table);

int blobLen = MDB_BIND_SIZE;
gchar blobChunk[MDB_BIND_SIZE];
int colID = mdb_bind_column_by_name(table, "<blobcolumn>", blobChunk, &blobLen);
MdbColumn *blobCol = g_ptr_array_index(table->columns, colID - 1);

if (mdb_fetch_row(table)) {
    /* mdb_ole_read() wants the raw OLE header, so save a copy of it
     * before the chunk reads overwrite the bound buffer. */
    gchar binder[MDB_MEMO_OVERHEAD];
    memcpy(binder, blobChunk, MDB_MEMO_OVERHEAD);
    int rlen;
    if ((rlen = mdb_ole_read(mdb, blobCol, binder, MDB_BIND_SIZE)) > 0) {
        char *mem = malloc(rlen);
        int pos = 0;
        memcpy(&mem[pos], blobChunk, rlen);
        pos += rlen;
        /* Keep appending chunks until mdb_ole_read_next() returns 0. */
        while ((rlen = mdb_ole_read_next(mdb, blobCol, binder)) > 0) {
            mem = realloc(mem, pos + rlen);
            memcpy(&mem[pos], blobChunk, rlen);
            pos += rlen;
        }
        // Use the content of mem here.
        // The length of the data is located
        // in the pos variable at this point.
        free(mem);
    }
}
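Since the snippet above skips error handling, here is the same read loop wrapped in a reusable helper. This is a minimal sketch: the function name, signature, and out-of-memory handling are mine, not part of mdbtools; the mdb_ole_read/mdb_ole_read_next calls are used exactly as in the snippet.

#include <string.h>
#include <stdlib.h>
#include <mdbtools.h>

/* Hypothetical helper, not part of mdbtools: call it after mdb_fetch_row()
 * has filled boundChunk (the buffer passed to mdb_bind_column_by_name).
 * Returns a malloc'd buffer holding the whole blob and stores its length
 * in *outLen; returns NULL on allocation failure or an empty blob. */
static char *read_whole_ole(MdbHandle *mdb, MdbColumn *col,
                            gchar *boundChunk, int *outLen)
{
    gchar header[MDB_MEMO_OVERHEAD];
    char *mem = NULL, *tmp;
    int rlen, pos = 0;

    /* Save the OLE header before the chunk reads overwrite the bound buffer. */
    memcpy(header, boundChunk, MDB_MEMO_OVERHEAD);

    for (rlen = mdb_ole_read(mdb, col, header, MDB_BIND_SIZE);
         rlen > 0;
         rlen = mdb_ole_read_next(mdb, col, header)) {
        tmp = realloc(mem, pos + rlen);   /* realloc(NULL, n) acts as malloc */
        if (!tmp) {
            free(mem);
            return NULL;
        }
        mem = tmp;
        memcpy(mem + pos, boundChunk, rlen);
        pos += rlen;
    }
    *outLen = pos;
    return mem;
}

Used in place of the inner block above: int n; char *blob = read_whole_ole(mdb, blobCol, blobChunk, &n); if (blob) { /* use n bytes */ free(blob); }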
> From: Shanon Mulley <sha...@gm...>
> Reply-To: Shanon Mulley <sha...@gm...>
> Date: Tue, 4 Oct 2005 10:28:04 +1000
> To: Andreas Richter <mdb...@ri...>
> Cc: "Brian A. Seklecki" <lav...@sp...>, Sam Moffatt
> <pa...@gm...>, <mdb...@li...>
> Subject: Re: [mdb-dev] Success - kind of...
>
> So, is this to say that none of the standard mdbtools utilities do this
> (like mdb-export, or mdb-sql) - I have to play around with the source?
>
> I'm looking forward to seeing your snippet of code, although I'm
> thinking it might break my brain getting it to work. I'm a recent
> convert to Linux, and so far have avoided playing around with source
> code, but I'm willing to give it a go if need be.
>
> On 10/3/05, Andreas Richter <mdb...@ri...> wrote:
>> I can find some of the code, but don't have access to it at my current
>> location...
>> Basically, mdb_bind_column to a buffer of size MDB_BIND_SIZE.
>> Then, after fetching the row, copy this data into a buffer of size
>> MDB_MEMO_OVERHEAD and use the new buffer in the call to mdb_ole_read.
>> This call will then copy the data into the original bound buffer and
>> return the size for you. Copy that data into yet another buffer. Then
>> call mdb_ole_read_next to see if there is more data. If there is, it
>> will again be copied to the bound buffer, and you can append it to the
>> data you already stored away. Then call mdb_ole_read_next again until
>> it returns 0. I'll paste some code here when I get home tonight.
>> There is one little snippet of code in the sources that also uses
>> mdb_ole_read. That's where I figured out how to use this. Just grep
>> for mdb_ole_read in all the sources.
>> Thanks,
>> Andy
>> ----- Original Message -----
>> From: "Shanon Mulley" <sha...@gm...>
>> To: "Brian A. Seklecki" <lav...@sp...>
>> Cc: "Andreas Richter" <mdb...@ri...>; "Sam Moffatt"
>> <pa...@gm...>; <mdb...@li...>
>> Sent: Monday, October 03, 2005 8:23 AM
>> Subject: Re: [mdb-dev] Success - kind of...
>>
>>
>> Andreas,
>>
>> You seem to be having more success with reading binary data from
>> Access than I have. I'm wondering if you can give me a few pointers
>> on getting binary data out. I've been teaching myself how to use
>> mdbtools, but have yet to work this out. You mention a function
>> "mdb_ole_read". Can you tell me where this is used?
>>
>> I'm using the latest version (from CVS), but didn't enable odbc - that
>> wouldn't build properly.
>>
>> Thanks.
>>
>> On 10/3/05, Brian A. Seklecki <lav...@sp...> wrote:
>>> Specifically which versions of the autotools did you use, just out of
>>> curiosity?
>>>
>>> ~BAS
>>>
>>> On Fri, 2005-09-30 at 20:59, Andreas Richter wrote:
>>>> Hi,
>>>> After beating autoconf, libtool, autogen and so on into pulp I have
>>>> the CVS version running. It can correctly read the blobs with
>>>> mdb_ole_read!!! Yes.
>>>> However, blobs that are longer than a single chunk will cause a
>>>> segmentation fault in mdb_ole_read_next. The problem seems to be
>>>> that the length calculation is wrong. The following code in
>>>> mdb_find_pg_row comes back with a len of 0xfffffb9b, i.e. a
>>>> negative number.
>>>>
>>>> mdb_swap_pgbuf(mdb);
>>>> mdb_find_row(mdb, row, off, len);
>>>> mdb_swap_pgbuf(mdb);
>>>>
>>>> In mdb_find_row the next_start is 4096 even though I am not loading
>>>> the blob from the first row. By testing things I think I figured out
>>>> that sometimes blobs come with extensions, but the pg_row for the
>>>> extension (i.e. the next pointer) is zero. Therefore, making the
>>>> following change in mdb_ole_read_next seemed to fix the crash
>>>> (line 460):
>>>>
>>>> if (mdb_find_pg_row(mdb, col->cur_blob_pg_row,
>>>>         &buf, &row_start, &len)) {
>>>>     return 0;
>>>> }
>>>>
>>>> to
>>>>
>>>> if (!col->cur_blob_pg_row || mdb_find_pg_row(mdb, col->cur_blob_pg_row,
>>>>         &buf, &row_start, &len)) {
>>>>     return 0;
>>>> }
>>>>
>>>> BTW: The CVS version of mdbtools.h has an #include <config.h> which
>>>> probably won't work from a normal include directory.
>>>> Thanks,
>>>> Andy
>>>>
>>>>> From: Andreas Richter <mdb...@ri...>
>>>>> Date: Fri, 30 Sep 2005 11:24:31 -0400
>>>>> To: 'Sam Moffatt' <pa...@gm...>, 'Andreas Richter'
>>>>> <mdb...@ri...>
>>>>> Cc: <mdb...@li...>
>>>>> Subject: RE: [mdb-dev] Problem reading blobs
>>>>>
>>>>> No success building the CVS dump under Darwin. The 0.6pre tar file
>>>>> did work correctly under a fink installation, but the autogen.sh
>>>>> file for the CVS dump did not work. Has anyone done this yet? I'll
>>>>> probably just execute autogen.sh under Linux and then move the files
>>>>> to Darwin. Any suggestions would be great.
>>>>> Thanks,
>>>>> Andy
>>>>>
>>>>> -----Original Message-----
>>>>> From: Sam Moffatt [mailto:pa...@gm...]
>>>>> Sent: Thursday, September 29, 2005 8:17 AM
>>>>> To: Andreas Richter
>>>>> Cc: mdb...@li...
>>>>> Subject: Re: [mdb-dev] Problem reading blobs
>>>>>
>>>>> What version of the library? And as others will say, the CVS version
>>>>> has heaps of improvements in it.
>>>>>
>>>>> On 9/29/05, Andreas Richter <mdb...@ri...> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am trying to read an Access 2000 database with blobs. After
>>>>>> reading the blob with mdb_ole_read it gives me 4k of data. Calling
>>>>>> mdb_ole_read_next will always tell me that there is no more data.
>>>>>> Is mdb_ole_read currently implemented for normal blobs? Can I help
>>>>>> implement it? I am using the library under Mac OS X.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Andy
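To verify a fix like the cur_blob_pg_row guard above, a small throwaway harness helps: it walks every row of a table and drains each blob through mdb_ole_read/mdb_ole_read_next, so any multi-chunk blob exercises the patched path. A minimal sketch; the harness itself is mine (table and column names come from the command line), and it assumes the two-argument mdb_open of the CVS tree:

#include <stdio.h>
#include <string.h>
#include <mdbtools.h>

int main(int argc, char **argv)
{
    if (argc < 4) {
        fprintf(stderr, "usage: %s file.mdb table column\n", argv[0]);
        return 1;
    }

    /* Same call sequence as the snippet at the top of the thread. */
    MdbHandle *mdb = mdb_open(argv[1], MDB_NOFLAGS);
    mdb_read_catalog(mdb, MDB_TABLE);
    MdbTableDef *table = mdb_read_table_by_name(mdb, argv[2], MDB_TABLE);
    mdb_read_columns(table);
    mdb_read_indices(table);
    mdb_rewind_table(table);

    int len = MDB_BIND_SIZE;
    gchar chunk[MDB_BIND_SIZE];
    int colID = mdb_bind_column_by_name(table, argv[3], chunk, &len);
    MdbColumn *col = g_ptr_array_index(table->columns, colID - 1);

    int row = 0;
    while (mdb_fetch_row(table)) {
        /* Save the OLE header, then drain every chunk of this blob. */
        gchar hdr[MDB_MEMO_OVERHEAD];
        memcpy(hdr, chunk, MDB_MEMO_OVERHEAD);
        int total = 0;
        int rlen = mdb_ole_read(mdb, col, hdr, MDB_BIND_SIZE);
        while (rlen > 0) {
            total += rlen;
            rlen = mdb_ole_read_next(mdb, col, hdr);
        }
        printf("row %d: %d bytes\n", ++row, total);
    }
    mdb_close(mdb);
    return 0;
}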