From: Patrick D. <pat...@ac...> - 2009-02-23 14:29:32

Here are some explanations and examples I found of doing exactly this on an
ASP.NET site. Different language and drivers, of course, but the concepts are
exactly the same, so they may provide good guidance. My guess is that, at
least with SQL Server, if you "set nocount on" at the beginning, then INSERT,
UPDATE and DELETE statements won't cause recordsets to be created.

http://www.4guysfromrolla.com/webtech/083101-1.shtml

and another using identities:

http://www.4guysfromrolla.com/webtech/tips/t122600-1.shtml

Patrick Dunnigan
Email: pat...@ac...
Visit us at www.activecompliance.com

-----Original Message-----
From: Kevin Kenny [mailto:kk...@ny...]
Sent: Saturday, February 21, 2009 1:15 AM
To: pat...@ac...
Cc: tcl...@li...
Subject: Re: [Tcl-tdbc] batching sql

Patrick Dunnigan wrote:
> Neither the stored procedure nor separating the statements works in this
> case. I have a lot of SQL that's quite dynamic, so a stored procedure
> doesn't work, and the separate prepare/executes don't seem to keep temp
> tables in play.
>
> Take, for example, the following (overlook the fact that this could be
> done efficiently in one SQL statement; it just stands in for something
> much more complex):
>
> select sod.productid into #temp1 from sales.salesorderdetail sod;
> select th.productid into #temp2 from production.transactionhistory th;
> select t1.productid, count(*) from #temp1 t1 join #temp2 t2
>     on t1.productid = t2.productid
> group by t1.productid
> having count(*) > 100000;
>
> This would be impossible to do in three separate prepare/executes,
> because the temp table seems to get destroyed after the first execute.

That at least mildly surprises me, because I've dealt with other systems
where temp tables persisted as long as the session, or until explicitly
dropped. No matter; I see what you're driving at.

> Here's a link on the MS site on batching:
> http://msdn.microsoft.com/en-us/library/ms131033.aspx
>
> I think a good compromise would be to return only the result set for the
> final SQL statement and automatically pitch the rest. This is what we
> have done in the past with batched SQL in our in-house library. Maybe
> determine how many semicolons there are in the SQL, execute each
> statement and discard its results until you get to the final one, then
> execute that and create the result set? Then it's just a matter of
> documenting that the final result set is the only one that will be
> returned.

Actually, I'm thinking that the job may be easier than I thought, since all
the metadata is associated with the 'result set' object. I think that a
"$resultset nextresult" call - returning 1 if there are more results and 0
otherwise - could be quite doable. (The command in question would call
SQLMoreResults, and if more results are pending, would clean up the bindings,
call SQLFreeStmt(..., SQL_UNBIND), and bind the columns anew for the next
result set.) In that case, I think I'd make the [foreach] and [allrows]
methods, at least on connection and statement objects, iterate over each
result set in turn, updating the '-columnsvariable' in between result sets.
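
As a rough illustration of the interface being proposed here, a script could
walk the result sets of Patrick's batch like the sketch below. This is only a
sketch of the proposal: the 'nextresult' method does not exist yet, and the
connection string is an assumption (the column alias was added so the count
column has a name).

  package require tdbc::odbc

  # Hypothetical connection string - adjust for the local DSN.
  tdbc::odbc::connection create db "DSN=AdventureWorks"

  set stmt [db prepare {
      select sod.productid into #temp1 from sales.salesorderdetail sod;
      select th.productid into #temp2 from production.transactionhistory th;
      select t1.productid, count(*) as n from #temp1 t1 join #temp2 t2
          on t1.productid = t2.productid
      group by t1.productid
      having count(*) > 100000;
  }]

  set rs [$stmt execute]
  while 1 {
      # Walk the rows of the current result set; the columns would be
      # rebound each time the iterator advances.
      $rs foreach row {
          puts $row
      }
      # Proposed method: 1 if another result set is pending, 0 when
      # the batch is exhausted.
      if {![$rs nextresult]} break
  }
  $rs close
  $stmt close
  db close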
Unfortunately, these interfaces don't seem to provide a good way to notify
the program when we move on to the next result, which is why I'd make the
'resultset' object have an explicit external iterator to move between result
sets. I'm open to better suggestions.

--
73 de ke9tv/2, Kevin
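
Patrick's earlier compromise - keep only the final statement's result set
and pitch the rest - would also fall out naturally from the same explicit
iterator. A minimal sketch, again assuming the hypothetical 'nextresult'
method and a hypothetical helper name:

  # Discard every result set except the last, and return the rows of
  # the final one. Assumes the proposed [$rs nextresult] method.
  proc lastResultRows {stmt} {
      set rs [$stmt execute]
      while 1 {
          set rows [$rs allrows]
          # Once nextresult reports no more pending results, $rows
          # still holds the output of the final statement.
          if {![$rs nextresult]} break
      }
      $rs close
      return $rows
  }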