Thread: [cx-oracle-users] BLOBs > 2GB
From: VJ K. <vj...@gm...> - 2010-05-10 15:20:25
It seems that writing BLOBs > 2GB is not possible. The code below:

import cx_Oracle
from ctypes import create_string_buffer

time_key = 201003110000
l = 1

cdm_conn = cx_Oracle.connect('...')
cdm_cur = cdm_conn.cursor()

# roughly 2.5 GB, i.e. past the 2 GB mark
buf = create_string_buffer(2684770500)

sql = """
insert into TMP(
    l,
    dimension_name_1,
    dimension_name_2,
    time_key,
    blob
)
values(
    """ + str(l) + """,
    'a',
    'b',
    """ + str(time_key) + """,
    empty_blob()
)
returning blob into :1"""

blob_var = cdm_cur.var(cx_Oracle.BLOB)
cdm_cur.execute(sql, (blob_var,))

# writing the whole buffer in one call:
blob_var.getvalue().write(buf.raw)

---------

causes this error:

cx_Oracle.DatabaseError: ORA-22993: specified input amount is greater
than actual source amount

If the buffer length is < 2GB, it runs correctly.

VJ
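Incidentally, the same insert can be written with bind variables instead of
string concatenation; a minimal equivalent sketch (the bind names :l,
:time_key and :out_blob are illustrative, not from the original post):

    sql = """
    insert into TMP (l, dimension_name_1, dimension_name_2, time_key, blob)
    values (:l, 'a', 'b', :time_key, empty_blob())
    returning blob into :out_blob"""

    blob_var = cdm_cur.var(cx_Oracle.BLOB)
    # cx_Oracle accepts named binds as keyword arguments to execute()
    cdm_cur.execute(sql, l=l, time_key=time_key, out_blob=blob_var)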
From: Anthony T. <ant...@gm...> - 2010-05-10 15:31:26
Hmm, just a question: why would you want to write more than 2 GB at a
time? Why not write it in chunks smaller than 2 GB? Can you see if that
workaround works for you? I'm not really in a position (limited
machines) to test such a large amount of data at one time -- so I can't
really say whether the limitation is in the cx_Oracle code, the Oracle
code, or your code.

Anthony

On Mon, May 10, 2010 at 9:14 AM, VJ Kumar <vj...@gm...> wrote:
> It seems that writing BLOBs > 2GB is not possible. The code below:
> [...]
> causes this error:
>
> cx_Oracle.DatabaseError: ORA-22993: specified input amount is greater
> than actual source amount
>
> If the buffer length is < 2GB, it runs correctly.
>
> VJ
From: VJ K. <vj...@gm...> - 2010-05-11 20:19:03
Anthony Tuininga <anthony.tuininga@...> writes:
>
> Hmm, just a question: why would you want to write more than 2 GB at a
> time? Why not write it in chunks smaller than 2 GB?

I would be happy to, but the write call does not have an offset
parameter, as in:

    write(buffer, offset, length)

Or does it?

Thanks.
VJ
From: Anthony T. <ant...@gm...> - 2010-05-11 20:35:35
On Tue, May 11, 2010 at 2:18 PM, VJ Kumar <vj...@gm...> wrote:
> Anthony Tuininga <anthony.tuininga@...> writes:
>
>> Hmm, just a question: why would you want to write more than 2 GB at a
>> time? Why not write it in chunks smaller than 2 GB?
>
> I would be happy to, but the write call does not have an offset
> parameter, as in:
>
>     write(buffer, offset, length)
>
> Or does it?

It does indeed. It's even in the documentation... :-)

    write(data, offset = 1)

is the signature.

Anthony
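With that signature, the oversized write can be split into sub-2 GB
pieces. A minimal sketch of the workaround, assuming lob is the locator
returned by blob_var.getvalue() and a hypothetical chunk size chosen
well under 2 GB:

    CHUNK = 1024 * 1024 * 1024  # 1 GB per call, safely below the 2 GB limit

    def write_in_chunks(lob, data):
        # cx_Oracle LOB offsets are 1-based
        offset = 1
        for start in range(0, len(data), CHUNK):
            piece = data[start:start + CHUNK]  # slicing copies this one chunk
            lob.write(piece, offset)
            offset += len(piece)

    # continuing the original example:
    # write_in_chunks(blob_var.getvalue(), buf.raw)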
From: VJ K. <vj...@gm...> - 2010-05-11 21:37:07
Anthony Tuininga <anthony.tuininga@...> writes:
>
> On Tue, May 11, 2010 at 2:18 PM, VJ Kumar <vjkmail@...> wrote:
>> I would be happy to, but the write call does not have an offset
>> parameter, as in:
>>
>>     write(buffer, offset, length)
>>
>> Or does it?
>
> It does indeed. It's even in the documentation...
>
>     write(data, offset = 1)

Right you are! I did see the parameter, but I worded my previous remark
incorrectly: what I meant was that there is no 'length' parameter
(rather than no 'offset') that would allow writing chunks without
copying them to a smaller buffer first.

By the way, OCI now has extended writes (OCILobWrite2) with 64-bit
offsets and lengths.

VJ
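Most of that per-chunk copying can be avoided with memoryview, whose
slices do not copy. Whether cx_Oracle's write accepts a buffer-protocol
object rather than a plain byte string is an assumption here; if it does
not, wrapping the slice in bytes() copies only one chunk at a time:

    CHUNK = 1024 * 1024 * 1024

    def write_in_chunks_nocopy(lob, buf):
        view = memoryview(buf)  # zero-copy window over the ctypes buffer
        offset = 1
        for start in range(0, len(view), CHUNK):
            piece = view[start:start + CHUNK]  # memoryview slices do not copy
            lob.write(piece, offset)  # fallback if needed: lob.write(bytes(piece), offset)
            offset += len(piece)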
From: Anthony T. <ant...@gm...> - 2010-05-12 13:41:09
On Tue, May 11, 2010 at 3:36 PM, VJ Kumar <vj...@gm...> wrote:
> Right you are! I did see the parameter, but I worded my previous remark
> incorrectly: what I meant was that there is no 'length' parameter
> (rather than no 'offset') that would allow writing chunks without
> copying them to a smaller buffer first.

Right. Well, if you already have the data in memory, there should be no
need to split it like that. Then again, how did it get there in the
first place? Perhaps it can be chunked from the original source?

> By the way, OCI now has extended writes (OCILobWrite2) with 64-bit
> offsets and lengths.

Nice. I'll have to take a look at that and see whether it will resolve
the problem for you. If I made the use of OCILobWrite2 possible, would
you be able to do some testing for me? I can't see that I am going to
be able to write more than 2 GB of data particularly easily at this
time. :-)

Anthony
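If the data originates in a file, the chunking Anthony suggests can
happen at read time, so the full 2.5 GB never needs to sit in memory at
once. A minimal sketch, with a hypothetical file path:

    CHUNK = 64 * 1024 * 1024  # 64 MB reads keep memory use modest

    def file_to_blob(lob, path):
        offset = 1  # cx_Oracle LOB offsets are 1-based
        with open(path, "rb") as f:
            while True:
                piece = f.read(CHUNK)
                if not piece:
                    break
                lob.write(piece, offset)
                offset += len(piece)

    # e.g.: file_to_blob(blob_var.getvalue(), "/tmp/big.dat")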
From: VJ K. <vj...@gm...> - 2010-05-12 14:04:28
Anthony Tuininga <anthony.tuininga@...> writes:
>
> Then again, how did it get there in the first place? Perhaps it can be
> chunked from the original source?

Yes, it can be done.

> If I made the use of OCILobWrite2 possible, would you be able to do
> some testing for me?

I can test, no problem there.