Thread: [cx-oracle-users] Cursor_InternalExecute invoking did not return
From: <jos...@si...> - 2017-03-02 08:26:27
Hi folks,

I need your help. I wrote a Python script that creates several tables, each selecting a huge amount of data from other tables, in separate Python threads. I ran the script several times and it always failed to complete. So I ran gdb to check what the script was doing: the function stack showed that the call to Cursor_InternalExecute did not return. I can't fix it, so I'm asking for your help. The Python code looks like this:

    def split_credit(user_name, region_code):
        with CDB(zc_dbUser, zc_dbPasswd, zc_dbCon) as db:  # thread's own cx_Oracle.Connection
            credittable = "XXXX_XXXXX_08{0}".format(region_code)
            creditsql = """CREATE TABLE XX.XX_XXXXX_08{0} tablespace {1} AS SELECT ....."""
            db.tabledrop(user_name, credittable)
            db.dosql(creditsql)
            time.sleep(1)

    workers = []
    for i in range(0, 10):
        region_code = "5" + str(i)
        workerCredit = threading.Thread(target=split_credit,
                                        name="credit_" + region_code,
                                        args=(zc_dbUser, region_code))
        workers.append(workerCredit)
        workerCredit.start()
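As an aside, the loop above starts the workers but never joins them. A minimal self-contained sketch of the start/join pattern (with a stand-in for the per-region database work, since CDB and the actual SQL are specific to the original script) might look like:

```python
import threading
import time

results = []
lock = threading.Lock()

def split_credit(user_name, region_code):
    # Stand-in for the CREATE TABLE ... AS SELECT work done per region.
    time.sleep(0.01)
    with lock:
        results.append((user_name, region_code))

workers = []
for i in range(10):
    region_code = "5" + str(i)
    worker = threading.Thread(target=split_credit,
                              name="credit_" + region_code,
                              args=("zc_user", region_code))
    workers.append(worker)
    worker.start()

# Wait for every worker so the script cannot exit before the work finishes.
for worker in workers:
    worker.join()
```

Joining also makes it obvious which thread is stuck: with a timeout on join() you can log the names of workers that are still alive.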
From: Anthony T. <ant...@gm...> - 2017-03-02 15:52:12
Hi Joshua,

The function Cursor_InternalExecute() will not return until the command has completed. If you are dealing with a large amount of data, it might simply be taking a long time to do its work. The other possibility is that something in the database is blocking your statement from executing. You can examine the views dba_locks and v$session to see what is happening with your session.

Anthony
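For reference, a minimal sketch of how that check might be run from Python (a sketch only: the connection arguments are placeholders, the exact v$session columns vary by Oracle version, and querying it requires the appropriate privileges):

```python
# Sessions that are currently blocked, and which session blocks them.
BLOCKING_SQL = """
SELECT sid, serial#, username, event, blocking_session, seconds_in_wait
  FROM v$session
 WHERE blocking_session IS NOT NULL
"""

def show_blockers(user, password, dsn):
    # Imported here so the query text above can be inspected even
    # without an Oracle client installed.
    import cx_Oracle
    connection = cx_Oracle.connect(user, password, dsn)
    try:
        cursor = connection.cursor()
        cursor.execute(BLOCKING_SQL)
        for row in cursor:
            print(row)
    finally:
        connection.close()
```

Running this from a second session while the script hangs would show whether the CREATE TABLE ... AS SELECT statements are waiting on one another.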
From: Derry H. <con...@de...> - 2017-03-09 15:17:38
Hi there,

I'm trying to parse data with variable fields and am looking for a more Pythonic, efficient way to do this. First, I have a dict like so:

    smsreport_dict = {'date': '2007-05-27 14:00:12 +0200',
                      'msgid': '2D67C865FB6FA25A9261C75E8D2F0F2B ',
                      'msgparts': '1',
                      'result': 'success'}

This is compiled from an XML file, and the fields are not always the same in each XML file; however, there is a maximum number of fields (and so of field names) that the XML file can contain. Consequently the Oracle DB has about 12 or so columns. I see two possible ways to execute the command, as follows:

    database_table_name = 'messaging'

    for field_name, input in smsreport_dict.items():
        sql_insert = 'insert into ' + database_table_name + \
            ' {0} values {1}'.format(field_name, input)
        cursor.execute(sql_insert)
    connection.commit()
    cursor.close()
    connection.close()

    # OR...

    sql_insert = 'insert into ' + database_table_name + \
        ' (a, b, c, d) values (:1, :2, :3, :4)'
    cursor.prepare(sql_insert)
    cursor.executemany(None, smsreport_dict)
    connection.commit()
    cursor.close()
    connection.close()

However, if the field names differ depending on the XML file (and therefore so can the number of fields/columns), how can I accommodate this variation? I'm thinking it may be best to loop through smsreport_dict and match each field name (i.e. date, msgid, etc.) with the corresponding field/column name already established in my Oracle DB.

Any ideas?

Regards,
Derry.
From: Shai B. <sh...@pl...> - 2017-03-09 16:01:49
Hi Derry,

On Thursday 09 March 2017 17:17:30 Derry Holding wrote:
> for field_name, input in smsreport_dict.items():
>     sql_insert = 'insert into ' + database_table_name + \
>         ' {0} values {1}'.format(field_name, input)
>     cursor.execute(sql_insert)

This looks wrong twice: once because you're inserting each field into its own record here, while the sample contents you posted indicate that each such dict should be stored as a single record; and a second time because you're formatting the values into the SQL statement, which exposes you to SQL injections (malicious or just erroneous ones).

> sql_insert = 'insert into ' + database_table_name + \
>     ' (a, b, c, d) values (:1, :2, :3, :4)'
> cursor.prepare(sql_insert)
> cursor.executemany(None, smsreport_dict)

You don't need executemany to insert a single record. You need to construct your insert statement and values, basically like so:

    columns, values = zip(*smsreport_dict.items())
    value_placeholders = [":%s" % (i + 1) for i in range(len(values))]
    sql_insert = 'insert into {0} ({1}) values ({2})'.format(
        database_table_name,
        ", ".join(columns),
        ", ".join(value_placeholders))
    cursor.execute(sql_insert, values)

Error handling, transaction handling, dealing with columns whose names need to be quoted, etc. are left as an exercise for the reader.

HTH,
Shai.
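To make that construction concrete, here is a self-contained version using the sample dict from the question and the table name Derry mentioned. It can be run without a database connection to inspect the generated statement (the final cursor.execute call is shown only as a comment, since it needs a live connection):

```python
smsreport_dict = {'date': '2007-05-27 14:00:12 +0200',
                  'msgid': '2D67C865FB6FA25A9261C75E8D2F0F2B',
                  'msgparts': '1',
                  'result': 'success'}
database_table_name = 'messaging'

# Split the dict into parallel column-name and value tuples.
columns, values = zip(*smsreport_dict.items())
# One numbered bind placeholder (:1, :2, ...) per value.
value_placeholders = [":%s" % (i + 1) for i in range(len(values))]
sql_insert = 'insert into {0} ({1}) values ({2})'.format(
    database_table_name,
    ", ".join(columns),
    ", ".join(value_placeholders))

print(sql_insert)
# then, with a live connection:
# cursor.execute(sql_insert, values)
```

Because the placeholders are generated from the dict itself, the statement adapts to however many fields a given XML file produced; only the column names are interpolated, and the values travel as bind variables.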