sqlrelay-discussion Mailing List for SQL Relay (Page 14)
From: Renat S. <sr...@st...> - 2010-05-18 19:17:18
|
18.05.2010 19:24, Cal Heldenbrand wrote: > Are your patches against the 0.41 release, or the CVS code? I'd like > to give a few of these a try. > They are against 0.41, but some of them depend on others, so you can't just apply the latest to the original 0.41 code. I have posted all the patches to this list, but it might be better to get them from this location: http://www.srr.pp.ru/www/sqlrelay/ Be aware that 05_exit_on_broken_socket.patch requires a patch to rudiments (poll.FULL.patch.gz <http://www.srr.pp.ru/www/sqlrelay/rudiments-0.32/poll.FULL.patch.gz>) |
|
From: Cal H. <ca...@fb...> - 2010-05-18 15:25:09
|
Renat, Are your patches against the 0.41 release, or the CVS code? I'd like to give a few of these a try. Thanks, --Cal On Tue, May 18, 2010 at 6:52 AM, Renat Sabitov <sr...@st...> wrote: > Hello David and others! > > I've discovered that the busy-listeners counter (semaphore #10) is not > decremented when an auth error occurs. > > Look at sqlrlistener::clientSession. The counter would normally be decremented > in handOffClient, but if authstatus != 0 the counter gets corrupted (it is never > decremented). > > As a result, the listener starts forking on every query, which significantly > decreases speed and raises system resource consumption. > > -- > Renat Sabitov e-mail: sr...@st... > Stack Soft jid: sr...@ja... > > > ------------------------------------------------------------------------------ > > > _______________________________________________ > Sqlrelay-discussion mailing list > Sql...@li... > https://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion > > |
|
From: Renat S. <sr...@st...> - 2010-05-18 11:52:59
|
Hello David and others! I've discovered that the busy-listeners counter (semaphore #10) is not decremented when an auth error occurs. Look at sqlrlistener::clientSession. The counter would normally be decremented in handOffClient, but if authstatus != 0 the counter gets corrupted (it is never decremented). As a result, the listener starts forking on every query, which significantly decreases speed and raises system resource consumption. -- Renat Sabitov e-mail: sr...@st... Stack Soft jid: sr...@ja... |
|
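The failure mode Renat describes above — a counter incremented at session start but decremented only on the successful hand-off path — can be sketched in miniature. The names and structure below are illustrative assumptions, not SQL Relay's actual listener code; the real counter is semaphore #10 in a semaphore set, not a plain int:

```cpp
#include <cassert>

// Toy stand-in for the busy-listeners semaphore.
struct BusyCounter {
    int value = 0;
    void increment() { ++value; }
    void decrement() { --value; }
};

// Buggy shape: the decrement lives inside the hand-off path, which is
// skipped when auth fails, so the counter is never restored.
void clientSessionBuggy(BusyCounter &busy, bool authok) {
    busy.increment();
    if (!authok) return;          // leak: counter stays incremented
    /* handOffClient() would run here, then: */
    busy.decrement();
}

// Fixed shape: an RAII guard decrements on every exit path,
// including the authstatus != 0 early return.
void clientSessionFixed(BusyCounter &busy, bool authok) {
    struct Guard {
        BusyCounter &c;
        Guard(BusyCounter &bc) : c(bc) { c.increment(); }
        ~Guard() { c.decrement(); }
    } guard(busy);
    if (!authok) return;          // guard's destructor still decrements
    /* handOffClient() would run here */
}
```

The same effect can be had without RAII by decrementing in the auth-failure branch before returning; the key point is that every exit path must balance the increment.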
From: Cal H. <ca...@fb...> - 2010-05-14 21:29:57
|
Hi all,
I'm not sure how well I can describe this bug, because I can't recreate this
issue with a simple C++ app. It only happens from within our web app. In
an odd set of circumstances with a relatively large query (15 columns) I get
a segfault when using the same sqlrcursor object, and calling sendQuery()
multiple times with different queries.
Here's the stacktrace:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 46912515408576 (LWP 20445)]
0x00002aaaaae05ab5 in row::addField (this=0x1b961a30, column=15,
buffer=0x1b907d7f "", length=0)
at row/row.C:42
42 extrafields[column-OPTIMISTIC_COLUMN_COUNT]=(char
*)buffer;
Current language: auto; currently c++
(gdb) bt
#0 0x00002aaaaae05ab5 in row::addField (this=0x1b961a30, column=15,
buffer=0x1b907d7f "", length=0)
at row/row.C:42
#1 0x00002aaaaae04d92 in sqlrcursor::parseData (this=0x1b812d70) at
cursor/parsedata.C:200
#2 0x00002aaaaae02877 in sqlrcursor::processResultSet (this=0x1b812d70,
getallrows=false,
rowtoget=9) at cursor/receiveresultset.C:53
#3 0x00002aaaaae01b65 in sqlrcursor::runQuery (this=0x1b812d70,
query=0x1bb5b280 "select prefix,\n list_nbr,\n
ma_tech_id,\n house_nbr,\n box_nbr,\n
streetdirprefix,\n streetname,\n streetsuffix,\n
streetdirsuffix,\n streetaddli"...) at cursor/runquery.C:20
#4 0x00002aaaaae004e0 in sqlrcursor::executeQuery (this=0x1b812d70) at
cursor/executequery.C:23
#5 0x00002aaaaadffa84 in sqlrcursor::sendQuery (this=0x1b812d70,
query=0x1bb5b280 "select prefix,\n list_nbr,\n
ma_tech_id,\n house_nbr,\n box_nbr,\n
streetdirprefix,\n streetname,\n streetsuffix,\n
streetdirsuffix,\n streetaddli"...) at cursor/sendquery.C:9
And in frame 0:
(gdb) frame 0
#0 0x00002aaaaae05ab5 in row::addField (this=0x1b961a30, column=15,
buffer=0x1b907d7f "", length=0)
at row/row.C:42
42 extrafields[column-OPTIMISTIC_COLUMN_COUNT]=(char
*)buffer;
(gdb) print extrafields
$1 = (char **) 0x0
I've managed to work around this by deleting the sqlrcursor and creating a
new object for every query... but, I'm not sure how much overhead this
creates when compared to calling sendQuery() multiple times. I'm not sure
if this matters, but column 15 is "decimal(sold_price,14,0)" in this
particular query. (DB2) So far, out of a gagillion queries in our system,
this is the only one that creates a segfault.
Please let me know if you'd like more info!
Thanks,
--Cal
|
|
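The NULL `extrafields` pointer in the backtrace above suggests the spill array for columns beyond `OPTIMISTIC_COLUMN_COUNT` was never allocated for this cursor's result set. A hypothetical defensive reconstruction — the class internals here are assumptions drawn only from the stack trace, not SQL Relay's real row.C:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// The stack trace shows row::addField crashing at column 15 with
// extrafields == (char **) 0x0.  This sketch keeps the first
// OPTIMISTIC_COLUMN_COUNT fields in a fixed array and spills the rest
// into a growable array, sizing it on demand instead of assuming an
// earlier code path already allocated it.
constexpr int OPTIMISTIC_COLUMN_COUNT = 15;

class Row {
public:
    void addField(int column, const char *buffer) {
        if (column < OPTIMISTIC_COLUMN_COUNT) {
            fields[column] = buffer;
        } else {
            size_t idx = column - OPTIMISTIC_COLUMN_COUNT;
            if (extrafields.size() <= idx)          // was: NULL deref here
                extrafields.resize(idx + 1, nullptr);
            extrafields[idx] = buffer;
        }
    }
    const char *getField(int column) const {
        return column < OPTIMISTIC_COLUMN_COUNT
               ? fields[column]
               : extrafields[column - OPTIMISTIC_COLUMN_COUNT];
    }
private:
    const char *fields[OPTIMISTIC_COLUMN_COUNT] = {};
    std::vector<const char *> extrafields;          // empty plays the "NULL" role
};
```

This also explains why deleting and recreating the sqlrcursor works around the crash: a fresh cursor rebuilds its row buffers from scratch instead of reusing ones sized for a narrower result set.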
From: Renat S. <sr...@st...> - 2010-05-14 14:02:25
|
Hello David and others! I have made a number of patches for sqlrelay and believe you will find them useful. 09_sessioncount_correction.patch - prevents sessioncount from going below zero 10_signal_handler_exit.patch - uses _exit() instead of exit() in signal handlers, preventing a double free on exit 11_status_more_info.patch - more status information (totalconnections, sessioncount, semaphores) 12_scale_by_fork.patch - when the scaler is started with the -fork option, it uses fork() instead of system() to open new connections. When a connection dies, the scaler reaps it with waitpid() and maintains the connection counter itself, so the connection counter in shm does not affect the scaler. I made this patch because the connection counter in shm always gets corrupted after two or three days of running sqlrelay under heavy load interspersed with periods of no load at all. I'll try to find out why this happens, but I believe this patch will greatly increase the stability of sqlrelay with dynamic scaling. -- Renat Sabitov e-mail: sr...@st... Stack Soft jid: sr...@ja... |
|
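The fork()/waitpid() scheme described for 12_scale_by_fork can be sketched as follows. This is a toy illustration, not the patch itself: the children here exit immediately where the real scaler would exec sqlr-connection, and the point is that the live-connection counter lives in the parent and is adjusted only on confirmed exits, so it cannot drift the way a shared-memory counter can:

```cpp
#include <cassert>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Fork `target` children and reap them all, keeping the count of live
// children in the parent process itself.  With system() the scaler never
// owns the child and must trust the shm counter; with fork()/waitpid()
// every decrement corresponds to a real, reaped process.
int runScaledChildren(int target) {
    int live = 0;
    for (int i = 0; i < target; i++) {
        pid_t pid = fork();
        if (pid == 0) _exit(0);    // child: the real scaler would exec
                                   // sqlr-connection here; _exit() avoids
                                   // running parent cleanup twice
        if (pid > 0) live++;       // parent increments only on success
    }
    int status;
    while (live > 0 && waitpid(-1, &status, 0) > 0)
        live--;                    // decrement only on a confirmed exit
    return live;                   // 0 once every child is reaped
}
```

Note the use of `_exit()` in the child, which is the same reasoning as 10_signal_handler_exit.patch: skipping atexit handlers prevents the parent's cleanup from running twice.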
From: Renat S. <sr...@st...> - 2010-05-13 11:18:13
|
Hello, David. I have a strange situation on one of our servers with sqlrelay. Statistics show that there are 12 server connections and 12 client connections, but there are actually only 4 working sqlr-connection processes. As far as I understand the situation, 8 sqlr-connection processes were shut down abnormally and exited without calling the cleanUp() function. As a result, the statistics do not correspond to the actual number of connections, and the scaler does not create new connections to handle incoming requests. I have not found the actual cause of this, but why not move incrementing and decrementing of the connection counter into the scaler, so that an abnormal connection exit would not affect it? In that case the client connection counter would still be wrong, but that would not affect the scaler or the handling of requests. -- Renat Sabitov e-mail: sr...@st... Stack Soft jid: sr...@ja... |
|
From: Cal H. <ca...@fb...> - 2010-05-12 22:49:06
|
I managed to find a solution to this problem, if there are any big-time DB2 users out there. If you need to run set commands on connection startup, you can edit the db2cli.ini file under the DB2 user's home directory. ( /home/db2guy/sqllib/cfg/db2cli.ini ) You can create a section for each catalog, or use the [COMMON] section to apply settings to all connections. Here's an example that fixed my issue: [COMMON] SysSchema=writeuser CURRENTSCHEMA=writeuser SchemaList="'SYSIBM','WRITEUSER','READUSER'" ;CurrentFunctionPath="'SYSFUN','WRITEUSER'" CurrentFunctionPath=sysfun,writeuser CurrentPackagePath=writeuser I'm still not quite sure which lines actually did the trick. The documentation is all over the place on syntax. After saving this file, just restart SQL Relay and the new settings should take effect. There are also mentions of performance tweaks for this file, so it might be worth checking out more. Here are a few reference links: http://www.thefillmoregroup.com/blog/?p=80 http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=/com.ibm.db2.luw.apdv.cli.doc/doc/c0007882.html --Cal On Thu, Apr 29, 2010 at 9:17 AM, Cal Heldenbrand <ca...@fb...> wrote: > Hi all, > > I've read through the docs and haven't seen a way to accomplish what I need > with our setup, but perhaps someone could make some suggestions. We use DB2 > with Q replication<http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.ii.doc/admin/cqrqrepl.htm>, > in which a read-write master replicates to read only slaves. In order to > enforce a read only environment, we had to create a new username on the > slaves that doesn't allow writes. 
(The read-write username is still needed, > since Qrep uses that) > > Right now my DB2 client randomly selects a slave server for a particular > set of queries, connects as our "readuser" and then runs these set commands: > > set schema writeuser > set current path=writeuser > set current function path=sysfun,writeuser > > That does some magic to allow the readuser to access the tables and > functions of the writeuser. In switching to SQL Relay with routing, I need > to find some method to run these queries when the connection is first > created to a read only server, before the client sends a query. > > Is this possible? Currently, I just have an instance setup for each read > only server, and I'm using the same logic in my client to pick the server > and run the set queries. It's not perfect though, so I'd like to have SQL > Relay do this job. :-) > > Thanks, > > --Cal > |
|
From: Cal H. <ca...@fb...> - 2010-05-03 19:51:06
|
Hi everyone, A quick question on locale settings in SQL Relay -- is there any way to request the returned string of sqlrcursor::getField() in a specific encoding? In one example, the registered trademark symbol (®) from DB2 comes back as the two characters ®, which leads me to believe SQL Relay is converting to UTF-8 somewhere. Our data in DB2 is stored as Windows-1252, and we have our web servers throw out a Content-Type header of ISO-8859-1. (When I set the Content-type header to UTF-8, my test application displays normally) Is there an "easy" method to get the raw contents from the database without the translation? (Like, commenting something out in the sql relay source) Or, do you plan on adding locale translations to the API in the near future? Thank you, --Cal |
|
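Pending a server-side answer, one client-side stopgap consistent with the symptom above (UTF-8 bytes being displayed as Latin-1, so ® shows as ®) would be folding two-byte UTF-8 sequences back into single Latin-1 bytes after getField(). This is an assumption-laden sketch, not an SQL Relay API feature, and it only covers code points up to U+00FF:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Fold UTF-8 back into Latin-1 for code points U+0000..U+00FF.
// U+00AE (registered trademark) is the 0xC2 0xAE pair from the message
// above; anything outside the Latin-1 range becomes '?'.
std::string utf8ToLatin1(const std::string &in) {
    std::string out;
    for (size_t i = 0; i < in.size(); ) {
        unsigned char c = in[i];
        if (c < 0x80) {                                   // plain ASCII
            out += (char)c;
            i += 1;
        } else if ((c & 0xE0) == 0xC0 && i + 1 < in.size()) {
            // two-byte sequence 110xxxxx 10xxxxxx -> U+0080..U+07FF
            unsigned cp = ((c & 0x1Fu) << 6) | (in[i + 1] & 0x3Fu);
            out += (cp <= 0xFF) ? (char)cp : '?';
            i += 2;
        } else {                                          // 3+ byte or stray
            out += '?';
            i += 1;
        }
    }
    return out;
}
```

Applied to a fetched field, `utf8ToLatin1(cur->getField(row, col))` would restore the single-byte form before the page is served as ISO-8859-1; a real fix belongs in the server's character-set handling.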
From: Cal H. <ca...@fb...> - 2010-04-29 14:17:49
|
Hi all, I've read through the docs and haven't seen a way to accomplish what I need with our setup, but perhaps someone could make some suggestions. We use DB2 with Q replication<http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.ii.doc/admin/cqrqrepl.htm>, in which a read-write master replicates to read only slaves. In order to enforce a read only environment, we had to create a new username on the slaves that doesn't allow writes. (The read-write username is still needed, since Qrep uses that) Right now my DB2 client randomly selects a slave server for a particular set of queries, connects as our "readuser" and then runs these set commands: set schema writeuser set current path=writeuser set current function path=sysfun,writeuser That does some magic to allow the readuser to access the tables and functions of the writeuser. In switching to SQL Relay with routing, I need to find some method to run these queries when the connection is first created to a read only server, before the client sends a query. Is this possible? Currently, I just have an instance setup for each read only server, and I'm using the same logic in my client to pick the server and run the set queries. It's not perfect though, so I'd like to have SQL Relay do this job. :-) Thanks, --Cal |
|
From: Cal H. <ca...@fb...> - 2010-04-23 17:28:54
|
Excellent, thank you! On Fri, Apr 23, 2010 at 12:05 PM, David Muse <dav...@fi...>wrote: > There's an item on the todo list for making the server-side fetch-at-once > size configurable in the sqlrelay.conf file, but not from a client call. > Not a bad idea though. I'll add that to the list. > > Dave > dav...@fi... > > > On 04/23/2010 12:00 PM, Cal Heldenbrand wrote: > > Hi David, thanks for the info! That's what I was thinking as well, that > the prefetch size of 10 was helping here, but I was really surprised at the > difference. I'm definitely impressed with your code! > > My application is a mix of DB2 CLI calls and SQLAPI<http://www.sqlapi.com>. > So far, some benchmarking between SQL Relay and SQLAPI is similar for fetch > speeds, with SQL Relay being just a bit slower for larger queries. (About > 10ms slower than SQLAPI, both set to 100 row prefetch with an 8000 row > query) I'm guessing the extra network hop is in play there, but the added > benefit of connection pooling saves me about 30 - 40ms in connecting and > logging into DB2. > > It's difficult to determine what the best setting for FETCH_AT_ONCE is. > The types of queries across our application vary quite a bit, from single > row fetches, to massive statistical queries spanning millions of rows. I've > found that 10 is a happy medium, but it would be nice to be able to change > that value from the client API. (Rather than recompiling/restarting the > sqlr server) Is that on your TODO list at all? > > Thank you, > > --Cal > > > On Fri, Apr 23, 2010 at 10:39 AM, David Muse <dav...@fi...>wrote: > >> In the example you used, the SQL Relay connection daemon runs the query, >> but doesn't fetch any rows immediately. When the getField() method is >> called, the connection daemon calls SQLFetchScroll over and over until it >> reaches the requested row. 
However, since the SQL_ATTR_ROW_ARRAY_SIZE is >> set >> to 10, it fetches these rows from the db2 server in groups of 10, reducing >> the number of round-trips to the DB. The rows are also fetched into >> pre-allocated, reused buffers, so there's no memory allocation penalty. >> when the requested row is reached, since setResultSetBufferSize() is set to >> 100, the next 100 rows are sent in a single round trip from the SQL Relay >> server to the client. Like the skipped rows, those rows are fetched from >> the DB in groups of 10 as well. The client side uses an optimized buffer >> allocation strategy as well to store the result set. >> >> So that's what's going on. Why that yields such a speed improvement is >> another question. I implemented SQL Relay taking advantage of as many >> optimizations as I could think of. It could be just a result of that. It's >> possible that highly optimized, native DB2 code could perform as well. >> >> My guess would be that the most significant speed improvements are due to >> the multi-row fetching (10 at a time) between the DB and SQL Relay >> connection daemon, which results in fewer round trips to the DB and the >> multi-row fetching (100 at a time) between the SQL Relay server and client, >> which results in fewer round trips there. Between the two, everything runs >> faster. >> >> There might even be further tuning that could be done. A larger result >> set buffer size reduces the number of round trips to fetch rows, which >> speeds things up, but requires more memory to be allocated on the client >> side, which slows things down. As such, for each query there's a >> speed-optimal result set buffer size. You could determine it by writing a >> program to run the same query over and over with different result set buffer >> sizes, fetch the same row each time and collect speed stats. >> >> David Muse >> dav...@fi... 
>> >> >> On 04/22/2010 05:55 PM, Cal Heldenbrand wrote: >> >> Oh, I see now that I don't have to seek through results. Calling >> getField(29999, "id") seems to work just fine, giving me the 30000'th record >> in the result set. Still, this seems pretty magical, I'm curious what's >> happening here! >> >> Thanks, >> >> --Cal >> >> On Thu, Apr 22, 2010 at 4:46 PM, Cal Heldenbrand <ca...@fb...> wrote: >> >>> Hello everyone, >>> >>> In experimenting with porting my DB2 CLI application over to SQL Relay's >>> API, I haven't found how to implement scrollable cursors... but I believe I >>> found a workaround, and I'm curious why it appears to be faster than >>> cursors. >>> >>> Here's a little snippet of what I'm doing with DB2: >>> >>> // set cursor type to static. >>> SQLSetStmtAttr( *stmt, SQL_ATTR_CURSOR_TYPE, >>> (SQLPOINTER)SQL_CURSOR_STATIC, 0 ); >>> >>> SQLExecute( *stmt ); >>> >>> SQLFetchScroll( *stmt, SQL_FETCH_ABSOLUTE, firstfetchoffset ); >>> >>> // after the first fetch, I call: >>> SQLFetch( *stmt ); >>> >>> Selecting any given 5 rows out of a 40,000 row result set takes around >>> 300 - 400ms with this method. >>> >>> To implement a similar behavior, I hacked this test up in about 5 >>> minutes. This throws out result set buffers until it reaches row 30000, >>> then fetches 5 rows and breaks out. >>> >>> cur->setResultSetBufferSize(100); >>> cur->sendQuery("select id from logon"); >>> int zoom = 30000 / 100; >>> for ( unsigned int i = 0 ; i < 30000 ; i += zoom ) >>> { >>> if ( (tmp = (char *)cur->getField(i, (uint32_t)0)) != NULL ) >>> printf("skip %d: %s\n", i, tmp); >>> else >>> break; >>> } >>> for ( unsigned int i = 29999 ; i < 30005 ; i++ ) >>> { >>> if ( tmp = (char *)cur->getField(i, "id") ) >>> printf("skip %d: %s\n", i, tmp); >>> } >>> >>> >>> This process seems to work fine, and happens in about 50ms! 
I guess I'm >>> just curious how this is happening internally, and how seeking through the >>> result set in this manner is so much faster than a static cursor. (And >>> also if there is a better method to accomplish this) >>> >>> Thanks for your time! >>> >>> --Cal >>> >>> >>> >> >> ------------------------------------------------------------------------------ >> >> >> _______________________________________________________ >> Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting >> http://www.doteasy.com >> >> >> _______________________________________________ >> Sqlrelay-discussion mailing lis...@li...://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion >> >> >> _______________________________________________________ >> Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting >> http://www.doteasy.com >> >> >> _______________________________________________________ >> Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting >> http://www.doteasy.com >> >> >> ------------------------------------------------------------------------------ >> >> _______________________________________________ >> Sqlrelay-discussion mailing list >> Sql...@li... 
>> https://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion >> >> > > ------------------------------------------------------------------------------ > > > _______________________________________________________ > Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting > http://www.doteasy.com > > > _______________________________________________ > Sqlrelay-discussion mailing lis...@li...://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion > > > _______________________________________________________ > Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting > http://www.doteasy.com > > > _______________________________________________________ > Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting > http://www.doteasy.com > > > ------------------------------------------------------------------------------ > > _______________________________________________ > Sqlrelay-discussion mailing list > Sql...@li... > https://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion > > |
|
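David's round-trip explanation in the thread above can be made concrete with some back-of-envelope arithmetic. The model below is a simplification of what he describes, using the numbers from this thread (FETCH_AT_ONCE=10, setResultSetBufferSize(100), seeking to row 30000); the constants are this thread's values, not fixed defaults:

```cpp
#include <cassert>

// Reaching row N costs roughly N / fetch_at_once fetches between the
// connection daemon and the database, and N / result_set_buffer_size
// transfers between the SQL Relay server and the client.  Larger batch
// sizes mean fewer round trips but more memory per batch.
struct RoundTrips {
    long db;            // connection daemon <-> database fetches
    long clientserver;  // SQL Relay server <-> client transfers
};

RoundTrips tripsToReachRow(long row, long fetchAtOnce, long bufferSize) {
    // ceiling division: a partial batch still costs a full round trip
    return { (row + fetchAtOnce - 1) / fetchAtOnce,
             (row + bufferSize - 1) / bufferSize };
}
```

For row 30000 this gives 3000 DB fetches and 300 client transfers, versus 30000 of each with single-row fetching, which is consistent with the large speedup Cal measured. The speed-optimal buffer size David mentions is the point where fewer round trips stop outweighing the per-batch allocation cost.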
From: David M. <dav...@fi...> - 2010-04-23 17:21:52
|
There's an item on the todo list for making the server-side fetch-at-once size configurable in the sqlrelay.conf file, but not from a client call. Not a bad idea though. I'll add that to the list. Dave dav...@fi... On 04/23/2010 12:00 PM, Cal Heldenbrand wrote: > Hi David, thanks for the info! That's what I was thinking as well, > that the prefetch size of 10 was helping here, but I was really > surprised at the difference. I'm definitely impressed with your code! > > My application is a mix of DB2 CLI calls and SQLAPI > <http://www.sqlapi.com>. So far, some benchmarking between SQL Relay > and SQLAPI is similar for fetch speeds, with SQL Relay being just a > bit slower for larger queries. (About 10ms slower than SQLAPI, both > set to 100 row prefetch with an 8000 row query) I'm guessing the > extra network hop is in play there, but the added benefit of > connection pooling saves me about 30 - 40ms in connecting and logging > into DB2. > > It's difficult to determine what the best setting for FETCH_AT_ONCE > is. The types of queries across our application vary quite a bit, > from single row fetches, to massive statistical queries spanning > millions of rows. I've found that 10 is a happy medium, but it would > be nice to be able to change that value from the client API. (Rather > than recompiling/restarting the sqlr server) Is that on your TODO > list at all? > > Thank you, > > --Cal > > > On Fri, Apr 23, 2010 at 10:39 AM, David Muse > <dav...@fi... <mailto:dav...@fi...>> wrote: > > In the example you used, the SQL Relay connection daemon runs the > query, but doesn't fetch any rows immediately. When the > getField() method is called, the connection daemon calls > SQLFetchScroll over and over until it reaches the requested row. > However, since the SQL_ATTR_ROW_ARRAY_SIZE is set > to 10, it fetches these rows from the db2 server in groups of 10, > reducing the number of round-trips to the DB. 
The rows are also > fetched into pre-allocated, reused buffers, so there's no memory > allocation penalty. when the requested row is reached, since > setResultSetBufferSize() is set to 100, the next 100 rows are sent > in a single round trip from the SQL Relay server to the client. > Like the skipped rows, those rows are fetched from the DB in > groups of 10 as well. The client side uses an optimized buffer > allocation strategy as well to store the result set. > > So that's what's going on. Why that yields such a speed > improvement is another question. I implemented SQL Relay taking > advantage of as many optimizations as I could think of. It could > be just a result of that. It's possible that highly optimized, > native DB2 code could perform as well. > > My guess would be that the most significant speed improvements are > due to the multi-row fetching (10 at a time) between the DB and > SQL Relay connection daemon, which results in fewer round trips to > the DB and the multi-row fetching (100 at a time) between the SQL > Relay server and client, which results in fewer round trips > there. Between the two, everything runs faster. > > There might even be further tuning that could be done. A larger > result set buffer size reduces the number of round trips to fetch > rows, which speeds things up, but requires more memory to be > allocated on the client side, which slows things down. As such, > for each query there's a speed-optimal result set buffer size. > You could determine it by writing a program to run the same query > over and over with different result set buffer sizes, fetch the > same row each time and collect speed stats. > > David Muse > dav...@fi... <mailto:dav...@fi...> > > > On 04/22/2010 05:55 PM, Cal Heldenbrand wrote: >> Oh, I see now that I don't have to seek through results. Calling >> getField(29999, "id") seems to work just fine, giving me the >> 30000'th record in the result set. 
Still, this seems pretty >> magical, I'm curious what's happening here! >> >> Thanks, >> >> --Cal >> >> On Thu, Apr 22, 2010 at 4:46 PM, Cal Heldenbrand <ca...@fb... >> <mailto:ca...@fb...>> wrote: >> >> Hello everyone, >> >> In experimenting with porting my DB2 CLI application over to >> SQL Relay's API, I haven't found how to implement scrollable >> cursors... but I believe I found a workaround, and I'm >> curious why it appears to be faster than cursors. >> >> Here's a little snippet of what I'm doing with DB2: >> >> // set cursor type to static. >> SQLSetStmtAttr( *stmt, SQL_ATTR_CURSOR_TYPE, >> (SQLPOINTER)SQL_CURSOR_STATIC, 0 ); >> >> SQLExecute( *stmt ); >> >> SQLFetchScroll( *stmt, SQL_FETCH_ABSOLUTE, firstfetchoffset ); >> >> // after the first fetch, I call: >> SQLFetch( *stmt ); >> >> Selecting any given 5 rows out of a 40,000 row result set >> takes around 300 - 400ms with this method. >> >> To implement a similar behavior, I hacked this test up in >> about 5 minutes. This throws out result set buffers until it >> reaches row 30000, then fetches 5 rows and breaks out. >> >> cur->setResultSetBufferSize(100); >> cur->sendQuery("select id from logon"); >> int zoom = 30000 / 100; >> for ( unsigned int i = 0 ; i < 30000 ; i += zoom ) >> { >> if ( (tmp = (char *)cur->getField(i, (uint32_t)0)) != >> NULL ) >> printf("skip %d: %s\n", i, tmp); >> else >> break; >> } >> for ( unsigned int i = 29999 ; i < 30005 ; i++ ) >> { >> if ( tmp = (char *)cur->getField(i, "id") ) >> printf("skip %d: %s\n", i, tmp); >> } >> >> >> This process seems to work fine, and happens in about 50ms! >> I guess I'm just curious how this is happening internally, >> and how seeking through the result set in this manner is so >> much faster than a static cursor. (And also if there is a >> better method to accomplish this) >> >> Thanks for your time! 
>> >> --Cal >> >> >> >> >> ------------------------------------------------------------------------------ >> >> >> _______________________________________________________ >> Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting >> http://www.doteasy.com >> >> >> _______________________________________________ >> Sqlrelay-discussion mailing list >> Sql...@li... <mailto:Sql...@li...> >> https://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion >> >> >> _______________________________________________________ >> Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting >> http://www.doteasy.com > > _______________________________________________________ > Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting > http://www.doteasy.com > > ------------------------------------------------------------------------------ > > _______________________________________________ > Sqlrelay-discussion mailing list > Sql...@li... > <mailto:Sql...@li...> > https://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion > > > > ------------------------------------------------------------------------------ > > > _______________________________________________________ > Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting > http://www.doteasy.com > > > _______________________________________________ > Sqlrelay-discussion mailing list > Sql...@li... > https://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion > > > _______________________________________________________ > Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting > http://www.doteasy.com _______________________________________________________ Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting http://www.doteasy.com |
|
From: Cal H. <ca...@fb...> - 2010-04-23 16:00:33
|
Hi David, thanks for the info! That's what I was thinking as well, that the
prefetch size of 10 was helping here, but I was really surprised at the
difference. I'm definitely impressed with your code!

My application is a mix of DB2 CLI calls and SQLAPI <http://www.sqlapi.com>.
So far, benchmarking between SQL Relay and SQLAPI shows similar fetch speeds,
with SQL Relay being just a bit slower for larger queries (about 10ms slower
than SQLAPI, both set to a 100-row prefetch with an 8000-row query). I'm
guessing the extra network hop is in play there, but the added benefit of
connection pooling saves me about 30-40ms in connecting and logging into DB2.

It's difficult to determine the best setting for FETCH_AT_ONCE. The types of
queries across our application vary quite a bit, from single-row fetches to
massive statistical queries spanning millions of rows. I've found that 10 is
a happy medium, but it would be nice to be able to change that value from the
client API (rather than recompiling/restarting the sqlr server). Is that on
your TODO list at all?

Thank you,

--Cal
|
|
From: David M. <dav...@fi...> - 2010-04-23 15:39:44
|
In the example you used, the SQL Relay connection daemon runs the query,
but doesn't fetch any rows immediately. When the getField() method is
called, the connection daemon calls SQLFetchScroll over and over until
it reaches the requested row. However, since the
SQL_ATTR_ROW_ARRAY_SIZE is set
to 10, it fetches these rows from the DB2 server in groups of 10,
reducing the number of round-trips to the DB. The rows are also fetched
into pre-allocated, reused buffers, so there's no memory allocation
penalty. When the requested row is reached, since
setResultSetBufferSize() is set to 100, the next 100 rows are sent in a
single round trip from the SQL Relay server to the client. Like the
skipped rows, those rows are fetched from the DB in groups of 10 as
well. The client side uses an optimized buffer allocation strategy as
well to store the result set.
So that's what's going on. Why that yields such a speed improvement is
another question. I implemented SQL Relay taking advantage of as many
optimizations as I could think of. It could be just a result of that.
It's possible that highly optimized, native DB2 code could perform as well.
My guess would be that the most significant speed improvements are due
to the multi-row fetching (10 at a time) between the DB and SQL Relay
connection daemon, which results in fewer round trips to the DB and the
multi-row fetching (100 at a time) between the SQL Relay server and
client, which results in fewer round trips there. Between the two,
everything runs faster.
There might even be further tuning that could be done. A larger result
set buffer size reduces the number of round trips to fetch rows, which
speeds things up, but requires more memory to be allocated on the client
side, which slows things down. As such, for each query there's a
speed-optimal result set buffer size. You could determine it by writing
a program to run the same query over and over with different result set
buffer sizes, fetch the same row each time and collect speed stats.
David Muse
dav...@fi...
On 04/22/2010 05:55 PM, Cal Heldenbrand wrote:
> Oh, I see now that I don't have to seek through results. Calling
> getField(29999, "id") seems to work just fine, giving me the 30000'th
> record in the result set. Still, this seems pretty magical, I'm
> curious what's happening here!
>
> Thanks,
>
> --Cal
>
> On Thu, Apr 22, 2010 at 4:46 PM, Cal Heldenbrand <ca...@fb...
> <mailto:ca...@fb...>> wrote:
>
> Hello everyone,
>
> In experimenting with porting my DB2 CLI application over to SQL
> Relay's API, I haven't found how to implement scrollable
> cursors... but I believe I found a workaround, and I'm curious why
> it appears to be faster than cursors.
>
> Here's a little snippet of what I'm doing with DB2:
>
> // set cursor type to static.
> SQLSetStmtAttr( *stmt, SQL_ATTR_CURSOR_TYPE,
> (SQLPOINTER)SQL_CURSOR_STATIC, 0 );
>
> SQLExecute( *stmt );
>
> SQLFetchScroll( *stmt, SQL_FETCH_ABSOLUTE, firstfetchoffset );
>
> // after the first fetch, I call:
> SQLFetch( *stmt );
>
> Selecting any given 5 rows out of a 40,000 row result set takes
> around 300 - 400ms with this method.
>
> To implement a similar behavior, I hacked this test up in about 5
> minutes. This throws out result set buffers until it reaches row
> 30000, then fetches 5 rows and breaks out.
>
> cur->setResultSetBufferSize(100);
> cur->sendQuery("select id from logon");
> int zoom = 30000 / 100;
> for ( unsigned int i = 0 ; i < 30000 ; i += zoom )
> {
> if ( (tmp = (char *)cur->getField(i, (uint32_t)0)) != NULL )
> printf("skip %d: %s\n", i, tmp);
> else
> break;
> }
> for ( unsigned int i = 29999 ; i < 30005 ; i++ )
> {
> if ( tmp = (char *)cur->getField(i, "id") )
> printf("skip %d: %s\n", i, tmp);
> }
>
>
> This process seems to work fine, and happens in about 50ms! I
> guess I'm just curious how this is happening internally, and how
> seeking through the result set in this manner is so much faster
> than a static cursor. (And also if there is a better method to
> accomplish this)
>
> Thanks for your time!
>
> --Cal
>
>
>
>
> ------------------------------------------------------------------------------
>
>
> _______________________________________________________
> Unlimited Disk, Data Transfer, PHP/MySQL Domain Hosting
> http://www.doteasy.com
>
>
> _______________________________________________
> Sqlrelay-discussion mailing list
> Sql...@li...
> https://lists.sourceforge.net/lists/listinfo/sqlrelay-discussion
|
|
From: Cal H. <ca...@fb...> - 2010-04-22 21:56:11
|
Oh, I see now that I don't have to seek through results. Calling
getField(29999, "id") seems to work just fine, giving me the 30000'th record
in the result set. Still, this seems pretty magical, I'm curious what's
happening here!
Thanks,
--Cal
On Thu, Apr 22, 2010 at 4:46 PM, Cal Heldenbrand <ca...@fb...> wrote:
> Hello everyone,
>
> In experimenting with porting my DB2 CLI application over to SQL Relay's
> API, I haven't found how to implement scrollable cursors... but I believe I
> found a workaround, and I'm curious why it appears to be faster than
> cursors.
>
> Here's a little snippet of what I'm doing with DB2:
>
> // set cursor type to static.
> SQLSetStmtAttr( *stmt, SQL_ATTR_CURSOR_TYPE, (SQLPOINTER)SQL_CURSOR_STATIC,
> 0 );
>
> SQLExecute( *stmt );
>
> SQLFetchScroll( *stmt, SQL_FETCH_ABSOLUTE, firstfetchoffset );
>
> // after the first fetch, I call:
> SQLFetch( *stmt );
>
> Selecting any given 5 rows out of a 40,000 row result set takes around 300
> - 400ms with this method.
>
> To implement a similar behavior, I hacked this test up in about 5 minutes.
> This throws out result set buffers until it reaches row 30000, then fetches
> 5 rows and breaks out.
>
> cur->setResultSetBufferSize(100);
> cur->sendQuery("select id from logon");
> int zoom = 30000 / 100;
> for ( unsigned int i = 0 ; i < 30000 ; i += zoom )
> {
> if ( (tmp = (char *)cur->getField(i, (uint32_t)0)) != NULL )
> printf("skip %d: %s\n", i, tmp);
> else
> break;
> }
> for ( unsigned int i = 29999 ; i < 30005 ; i++ )
> {
> if ( tmp = (char *)cur->getField(i, "id") )
> printf("skip %d: %s\n", i, tmp);
> }
>
>
> This process seems to work fine, and happens in about 50ms! I guess I'm
> just curious how this is happening internally, and how seeking through the
> result set in this manner is so much faster than a static cursor. (And
> also if there is a better method to accomplish this)
>
> Thanks for your time!
>
> --Cal
>
>
>
|
|
From: Cal H. <ca...@fb...> - 2010-04-22 21:47:19
|
Hello everyone,
In experimenting with porting my DB2 CLI application over to SQL Relay's
API, I haven't found how to implement scrollable cursors... but I believe I
found a workaround, and I'm curious why it appears to be faster than
cursors.
Here's a little snippet of what I'm doing with DB2:
// set cursor type to static.
SQLSetStmtAttr( *stmt, SQL_ATTR_CURSOR_TYPE, (SQLPOINTER)SQL_CURSOR_STATIC,
0 );
SQLExecute( *stmt );
SQLFetchScroll( *stmt, SQL_FETCH_ABSOLUTE, firstfetchoffset );
// after the first fetch, I call:
SQLFetch( *stmt );
Selecting any given 5 rows out of a 40,000 row result set takes around 300 -
400ms with this method.
To implement a similar behavior, I hacked this test up in about 5 minutes.
This throws out result set buffers until it reaches row 30000, then fetches
5 rows and breaks out.
cur->setResultSetBufferSize(100);
cur->sendQuery("select id from logon");
int zoom = 30000 / 100;
for ( unsigned int i = 0 ; i < 30000 ; i += zoom )
{
if ( (tmp = (char *)cur->getField(i, (uint32_t)0)) != NULL )
printf("skip %d: %s\n", i, tmp);
else
break;
}
for ( unsigned int i = 29999 ; i < 30005 ; i++ )
{
> if ( (tmp = (char *)cur->getField(i, "id")) != NULL )
printf("skip %d: %s\n", i, tmp);
}
This process seems to work fine, and happens in about 50ms! I guess I'm
just curious how this is happening internally, and how seeking through the
result set in this manner is so much faster than a static cursor. (And
also if there is a better method to accomplish this)
Thanks for your time!
--Cal
|
|
From: Matias R. <cy...@sa...> - 2010-04-21 20:16:36
|
Hi David,
Thanks for your reply. My problem with having a different instance for each client is that my application may have up to 500 clients, and I think having 500 instances would be way too much.
Maybe adding the possibility of using variables would be very interesting. Maybe something like this?
<users>
<user user="clientid1" password="clientid1"/>
<user user="clientid2" password="clientid2"/>
<user user="clientid3" password="clientid3"/>
<user user="clientid4" password="clientid4"/>
</users>
<connections>
</connections>
<router>
<!-- send queries to "mysqldb" which match the
specified patterns -->
<route host="" port="" socket="/tmp/database.socket" user="mysqluser" password="mysqlpassword">
<!-- allow selects with a where clause -->
<query pattern="^\s*select\s+.*\s+from\s+.*\s+where\s+client_id=$clientidN"/>
</route>
<!-- all other queries will be filtered out -->
<filter>
<query pattern=".*"/>
</filter>
</router>
And when "$clientidN" is used, translate the variable to the username that was actually used to log into that specific connection. I think having the ability to do something like that would be very interesting (and not only because it would solve my problem :) )
Regards,
--
Matias Rollan
<cy...@sa...>
On Wed, Apr 21, 2010 at 11:42:03AM -0400, David Muse wrote:
> Hello Matias,
>
> I'm glad you got it working.
>
> You can have multiple queries per router instance. For instance, you
> can add another query tag with a pattern for updates to that same router
> instance, rather than having to create a separate router instance for
> updates.
>
> But you'd need to create a separate instance for each client.
>
> I've recently gotten requests for a variety of enhanced routing
> features, including several user-based routing features. They probably
> won't make it into the very next release, 0.42, but are on the list for
> 0.43.
>
> David Muse
> dav...@fi...
>
> On 04/14/2010 10:40 AM, Matias Rollan wrote:
> > Hola!
> >
> > I created the following sqlrelay.conf and so far it is working as expected, only allowing selects with where client_id=2 for the sqlrelay "clientid2" user.
> >
> > <?xml version="1.0"?>
> > <!DOCTYPE instances SYSTEM "sqlrelay.dtd">
> >
> > <instances>
> >
> > <!-- This instance maintains connections to a MySQL database -->
> > <instance id="database" port="" socket="/tmp/database.socket" dbase="mysql" connections="3" maxconnections="15" maxqueuelength="5" growby="1" ttl="60" endofsession="commit" sessiontimeout="600" runasuser="sqlrelay" runasgroup="sqlrelay" cursors="5" authtier="listener" handoff="pass" deniedips="" allowedips="" debug="none" maxquerysize="65536" maxstringbindvaluelength="4000" maxlobbindvaluelength="71680" idleclienttimeout="-1">
> > <users>
> > <user user="mysqluser" password="mysqlpassword"/>
> > </users>
> > <connections>
> > <connection connectionid="db" string="user=mysqluser;password=mysqlpassword;host=localhost;db=database;" metric="1" behindloadbalancer="no"/>
> > </connections>
> > </instance>
> >
> >
> > <!-- This instance sends queries to the
> > mysqldb instance after filtering them -->
> > <instance id="router" port="9001" socket="/tmp/router.socket" dbase="router" connections="3" maxconnections="15" maxqueuelength="5" growby="1" ttl="60" endofsession="commit" sessiontimeout="600" runasuser="sqlrelay" runasgroup="sqlrelay" cursors="5" authtier="listener" handoff="pass" deniedips="" allowedips="" debug="none" maxquerysize="65536" maxstringbindvaluelength="4000" maxlobbindvaluelength="71680" idleclienttimeout="-1">
> > <users>
> > <user user="clientid2" password="clientid2"/>
> > </users>
> > <connections>
> > </connections>
> > <router>
> > <!-- send queries to "mysqldb" which match the
> > specified patterns -->
> > <route host="" port="" socket="/tmp/database.socket" user="mysqluser" password="mysqlpassword">
> > <!-- allow selects with a where clause -->
> > <query pattern="^\s*select\s+.*\s+from\s+.*\s+where\s+client_id=2"/>
> > </route>
> > <!-- all other queries will be filtered out -->
> > <filter>
> > <query pattern=".*"/>
> > </filter>
> > </router>
> > </instance>
> >
> >
> > </instances>
> >
> > I only found that the dbase="router" setting wasn't working because the "router" value wasn't listed in sqlrelay.dtd. I added it to the sqlrelay.dtd file and now it is working fine.
> >
> > The sqlrelay version that I am currently using is:
> > ii sqlrelay 1:0.39.4-6ubuntu1 Database connection pooling, proxying and lo
> >
> >
> > Now I guess I only need to create one entry per "client" to solve the problem I explained in the previous email, but I am wondering: is there a better way to restrict each SQL Relay user to a certain query form than creating one instance per user and query pattern?
> >
> > Thanks in advance,
> >
> > --
> > Matias Rollan
> > <cy...@sa...>
> >
> > On Tue, Apr 13, 2010 at 04:49:59PM -0300, Matias Rollan wrote:
> >
> >> Hola!
> >>
> >> I am new to SQLrelay and I just installed it and connected it to MySQL for testing purposes as I think it may solve my problem.
> >>
> >>
> >> I need to find a way to filter queries that are currently being executed from different MySQL clients so each client can only modify the rows that belong to them and not the others. Right now every client is connecting to a MySQL server with the same user/passwd so I have no control over how to limit each client to certain rows of a table.
> >>
> >> An example to try to clarify this:
> >>
> >> table: M
> >> fields: a,b,c,d,client
> >>
> >> I need client 1 to be able to UPDATE only the rows of table M WHERE client=1, and so on.
> >>
> >> Since MySQL has no permissions per row I think that I can use SQLrelay in the middle, connecting each "client" to SQLrelay to handle the SQL query permissions at the SQLrelay level.
> >>
> >> The clients queries are written the following way:
> >>
> >> "UPDATE M set field=value WHERE client=$client"
> >>
> >> I was thinking that I could create one SQLrelay user per client and only allow the queries with the form written above. Is this possible? Do I have to add one user per client with the query that is allowed for that user?
> >>
> >> Any recommendation would be highly appreciated.
> >>
> >> Thanks in advance,
> >>
> >> --
> >> Matias Rollan
> >> <cy...@sa...>
> >>
|
|
From: Claudio F. <kla...@gm...> - 2010-04-21 15:57:11
|
Ok, attached is the patch to sqlrelay's xml parsing. Let me know if it helps.
|
|
From: David M. <dav...@fi...> - 2010-04-21 15:42:40
|
Hello Matias,

I'm glad you got it working.

You can have multiple queries per router instance. For instance, you can add
another query tag with a pattern for updates to that same router instance,
rather than having to create a separate router instance for updates.

But you'd need to create a separate instance for each client.

I've recently gotten requests for a variety of enhanced routing features,
including several user-based routing features. They probably won't make it
into the very next release, 0.42, but are on the list for 0.43.

David Muse
dav...@fi...
|
|
From: Claudio F. <kla...@gm...> - 2010-04-20 14:14:48
|
On Wed, Apr 14, 2010 at 10:38 AM, Andre Hübner <and...@gm...> wrote:
> think found the fault. there is an error in the original dtd of my
> distribution. connections and routers were not optional but required
> elements. so i had to insert this empty <routers> tag and this triggered
> the error. now it is running but there are further problems.

SQL Relay's error reporting on XML parsing problems leaves a lot to be
desired. I had patched that; I guess I could attach the patch to see if it
helps you diagnose your problem. It will take some work (separating it from
other patches), so I'll post again later.
|
|
From: Cameron L. <cla...@gm...> - 2010-04-19 21:27:42
|
Has anyone helped you yet?

On Wed, Apr 14, 2010 at 8:38 AM, Andre Hübner <and...@gm...> wrote:
> Hello,
>
> > My conf is still simple. i created a listener and a connection to mysql
> > on other host. the connection to mysql is created and seems to work but
> > there is no listener listening on host where sqlrelay is running. also
> > no socket is created.
>
> think found the fault. there is an error in the original dtd of my
> distribution. connections and routers were not optional but required
> elements. so i had to insert this empty <routers> tag and this triggered
> the error. now it is running but there are further problems.
>
> i do a connect from other host to port 9000 where sqlrelay is running. a
> connection is established but nothing more is happening. i expected to
> get an answer from the mysql host which is connected by sqlrelay but
> there seems to be no other traffic.
>
> What happens here?
> Thanks,
> Andre

--
Cameron Laird
Phaseit, Inc.
+1 281 648 9889
|
|
From: Cal H. <ca...@fb...> - 2010-04-19 19:05:09
|
I think I found my answer: the addresses attribute of the instance tag
apparently needs to be defined in order for everything to bind to an
address? The documentation says "If absent or set to 0.0.0.0, they will bind
to all addresses," but that didn't seem to be the behavior in my case.
Setting addresses to "0.0.0.0" appears to have fixed the problem. I can
connect with sqlrsh and run queries now.

My second, minor issue: am I calling configure with the wrong syntax?

$ ./configure --disable-java --disable-ruby --with-db2-version=9 --with-db2-prefix=/opt/ibm/db2/V9.5/
...
***** DB2 **********************
configure: WARNING: DB2 support will not be built.
db2 version... 9
******************************

Manually hacking the paths in worked for me, but it would be nice to get
autoconf working correctly for future use.

Thank you!

--Cal

On Mon, Apr 19, 2010 at 12:22 PM, Cal Heldenbrand <ca...@fb...> wrote:
> Hello,
>
> I just stumbled onto your project, and I find the feature list of SQL
> Relay very exciting. This is the first time I've tried to compile and run
> it, and I'm having a problem with the DB2 9.5 client.
>
> My environment is a mix of CentOS/RHEL 4 and 5, with my current test
> machine running RHEL 4, kernel 2.6.9 (x86_64). The DB2 client is 9.5.3,
> with server version 9.5.4. I also have a mix of 9.7.x clients, but I
> haven't tested those yet.
>
> To get all of this to compile, I modified your acsite.m4 file and changed
> the hard-coded DB2 9.1 paths to point at my 9.5 installation, then reran
> aclocal/autoconf. Everything appears to have compiled and linked
> correctly.
>
> ***** DB2 **********************
> db2 version... 9
> db2 includes... -I/opt/ibm/db2/V9.5/include
> db2 libs... -L/opt/ibm/db2/V9.5/lib64 -ldb2
> ******************************
>
> When I try to run the sqlr-start binary, I get this output:
>
> $ ./sqlr-start -id dev
>
> Starting listener:
> sqlr-listener -id dev -config /usr/local/firstworks/etc/sqlrelay.conf -debug
> Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-listener.9475
>
> Starting 5 connections to dev :
> sqlr-connection-db2 -id dev -connectionid dev -config /usr/local/firstworks/etc/sqlrelay.conf -debug
> Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9477
> sqlr-connection-db2 -id dev -connectionid dev -config /usr/local/firstworks/etc/sqlrelay.conf -debug
> Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9479
> sqlr-connection-db2 -id dev -connectionid dev -config /usr/local/firstworks/etc/sqlrelay.conf -debug
> Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9481
> sqlr-connection-db2 -id dev -connectionid dev -config /usr/local/firstworks/etc/sqlrelay.conf -debug
> Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9483
> sqlr-connection-db2 -id dev -connectionid dev -config /usr/local/firstworks/etc/sqlrelay.conf -debug
> Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9485
>
> Starting scaler:
> sqlr-scaler -id dev -debug -config /usr/local/firstworks/etc/sqlrelay.conf
>
> sqlr-scaler error:
> The file /usr/local/firstworks/var/sqlrelay/tmp/pids/sqlr-listener-dev was not found.
> This usually means that the sqlr-listener is not running.
> The sqlr-listener must be running for the sqlr-scaler to start.
>
> sqlr-scaler failed to start.
>
> I have also tried to re-run the sqlr-listener and sqlr-scaler manually,
> but it seems I still don't get a connection.
>
> The debug output also hasn't been too helpful:
>
> $ cat sqlr-listener.9475
> creating shared memory and semaphores
> id filename: /usr/local/firstworks/var/sqlrelay/tmp/ipc/dev
> creating shared memory...
> creating semaphores...
>
> $ cat sqlr-connection.9477
> getting unix socket...
> opening /usr/local/firstworks/var/sqlrelay/tmp/sockseq
> locking...
> got sequence number: 46
> writing new sequence number: 47
> unlocking...
> done getting unix socket
> attaching to shared memory and semaphores
> id filename: /usr/local/firstworks/var/sqlrelay/tmp/ipc/dev
> attaching to shared memory...
> attaching to semaphores...
> done attaching to shared memory and semaphores
> logging in...
> done logging in
> initializing cursors...
> 0
> 1
> 2
> 3
> 4
> done initializing cursors
> setting autocommit...
> done setting autocommit
> incrementing connection count...
> acquiring connection count mutex
> done acquiring connection count mutex
> 1
> signalling scaler to read
> done signalling scaler to read
> releasing connection count mutex
> done releasing connection count mutex
> done incrementing connection count
> creating /usr/local/firstworks/var/sqlrelay/tmp/ipc/dev-dev
> waiting for available database...
> database is available
> database is available
> initializing session...
> done initializing session...
> announcing availability...
> registering for handoff...
> handoffsockname: /usr/local/firstworks/var/sqlrelay/tmp/sockets/dev-handoff
> trying...
> done registering for handoff
> acquiring announce mutex
> done acquiring announce mutex
> handoff=pass
> signalling listener to read
> done signalling listener to read
> waiting for listener
> done waiting for listener
> releasing announce mutex
> done releasing announce mutex
> done announcing availability...
> waiting for client...
> pass failed
> done waiting for client
> decrementing session count...
> acquiring session count mutex
> done acquiring session count mutex
> -1
> releasing session count mutex
> done releasing session count mutex
> done decrementing session count
> waiting for available database...
> database is available
> database is available
> initializing session...
> done initializing session...
> announcing availability...
> registering for handoff...
> handoffsockname: /usr/local/firstworks/var/sqlrelay/tmp/sockets/dev-handoff
> trying...
>
> My config file is:
>
> <?xml version="1.0"?>
> <!DOCTYPE instances SYSTEM "sqlrelay.dtd">
>
> <instances>
>     <instance id="dev" port="9000" socket="/tmp/db2-dev.socket"
>         dbase="db2" connections="5" maxconnections="10"
>         maxqueuelength="0" growby="1" ttl="60" endofsession="commit"
>         sessiontimeout="60" runasuser="db2user" runasgroup="db2user"
>         cursors="5" authtier="listener" handoff="pass"
>         listenertimeout="20" maxlisteners="20"
>         debug="listener_and_connection">
>         <users>
>             <user user="db2user" password="pass"/>
>         </users>
>         <connections>
>             <connection connectionid="dev"
>                 string="db=devdb;user=db2user;password=pass;autocommit=yes"
>                 metric="1" behindloadbalancer="no"/>
>         </connections>
>     </instance>
> </instances>
>
> And here is the catalog and node information for the "devdb" catalog:
>
> Database alias = DEVDB
> Database name = FLEXPROD
> Node name = DEVDB
> Database release level = c.00
> Comment =
> Directory entry type = Remote
> Catalog database partition number = -1
> Alternate server hostname =
> Alternate server port number =
>
> Node name = DEVDB
> Comment =
> Directory entry type = LOCAL
> Protocol = TCPIP
> Hostname = 192.168.74.29
> Service name = 51000
>
> When I manually try to start up the two failed services, they seem to
> run, but output no debugging information. Running the sqlrsh command line
> client gives me:
>
> $ sqlrsh -id dev
> SQLRShell - Version 0.22
> Connected to: localhost:9000 as db2user
>
> type help; for a help.
>
> 0> debug on;
> 0> select * from sysibm.sysdummy1;
> Result Set Buffer Size: 100
> Connecting to listener...
> Unix socket: /tmp/db2-dev.socket
> Inet socket: localhost:9000
> Setting Error
> Couldn't connect to the listener.
> Couldn't connect to the listener.
>
> Rows Returned : 0
> Fields Returned : 0
> System time : 10000
>
> Thanks for any help, and let me know if you need more information!
>
> --Cal
|
|
From: Cal H. <ca...@fb...> - 2010-04-19 17:23:24
|
Hello,
I just stumbled onto your project, and I find the feature list of SQL Relay
very exciting. This is the first time I've tried to compile and run it, and
I'm having a problem with the DB2 9.5 client.
My environment is a mix of CentOS/RHEL 4 and 5, with my current test machine
running RHEL 4, kernel 2.6.9 (x86_64). The DB2 client is 9.5.3, with server
version 9.5.4. I also have a mix of 9.7.x clients, but I haven't tested
those yet.
To get all of this to compile, I modified your acsite.m4 file, changed
the hard-coded DB2 9.1 paths to point at my 9.5 installation, and then reran
aclocal/autoconf. Everything appears to have compiled and linked correctly.
***** DB2 **********************
db2 version... 9
db2 includes... -I/opt/ibm/db2/V9.5/include
db2 libs... -L/opt/ibm/db2/V9.5/lib64 -ldb2
******************************
When I try to run the sqlr-start binary, I get this output:
----------------------------------------------------------------------------------------------------------------------------------
$ ./sqlr-start -id dev
Starting listener:
sqlr-listener -id dev -config /usr/local/firstworks/etc/sqlrelay.conf
-debug
Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-listener.9475
Starting 5 connections to dev :
sqlr-connection-db2 -id dev -connectionid dev -config
/usr/local/firstworks/etc/sqlrelay.conf -debug
Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9477
sqlr-connection-db2 -id dev -connectionid dev -config
/usr/local/firstworks/etc/sqlrelay.conf -debug
Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9479
sqlr-connection-db2 -id dev -connectionid dev -config
/usr/local/firstworks/etc/sqlrelay.conf -debug
Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9481
sqlr-connection-db2 -id dev -connectionid dev -config
/usr/local/firstworks/etc/sqlrelay.conf -debug
Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9483
sqlr-connection-db2 -id dev -connectionid dev -config
/usr/local/firstworks/etc/sqlrelay.conf -debug
Debugging to: /usr/local/firstworks/var/sqlrelay/debug/sqlr-connection.9485
Starting scaler:
sqlr-scaler -id dev -debug -config
/usr/local/firstworks/etc/sqlrelay.conf
sqlr-scaler error:
The file
/usr/local/firstworks/var/sqlrelay/tmp/pids/sqlr-listener-dev was not found.
This usually means that the sqlr-listener is not running.
The sqlr-listener must be running for the sqlr-scaler to start.
sqlr-scaler failed to start.
----------------------------------------------------------------------------------------------------------------------------------
I have also tried to re-run the sqlr-listener and sqlr-scaler manually, but
it seems I still don't get a connection.
The debug output also hasn't been too helpful:
----------------------------------------------------------------------------------------------------------------------------------
$ cat sqlr-listener.9475
creating shared memory and semaphores
id filename:
/usr/local/firstworks/var/sqlrelay/tmp/ipc/dev
creating shared memory...
creating semaphores...
----------------------------------------------------------------------------------------------------------------------------------
$ cat sqlr-connection.9477
getting unix socket...
opening /usr/local/firstworks/var/sqlrelay/tmp/sockseq
locking...
got sequence number: 46
writing new sequence number: 47
unlocking...
done getting unix socket
attaching to shared memory and semaphores
id filename:
/usr/local/firstworks/var/sqlrelay/tmp/ipc/dev
attaching to shared memory...
attaching to semaphores...
done attaching to shared memory and semaphores
logging in...
done logging in
initializing cursors...
0
1
2
3
4
done initializing cursors
setting autocommit...
done setting autocommit
incrementing connection count...
acquiring connection count mutex
done acquiring connection count mutex
1
signalling scaler to read
done signalling scaler to read
releasing connection count mutex
done releasing connection count mutex
done incrementing connection count
creating /usr/local/firstworks/var/sqlrelay/tmp/ipc/dev-dev
waiting for available database...
database is available
database is available
initializing session...
done initializing session...
announcing availability...
registering for handoff...
handoffsockname: /usr/local/firstworks/var/sqlrelay/tmp/sockets/dev-handoff
trying...
done registering for handoff
acquiring announce mutex
done acquiring announce mutex
handoff=pass
signalling listener to read
done signalling listener to read
waiting for listener
done waiting for listener
releasing announce mutex
done releasing announce mutex
done announcing availability...
waiting for client...
pass failed
done waiting for client
decrementing session count...
acquiring session count mutex
done acquiring session count mutex
-1
releasing session count mutex
done releasing session count mutex
done decrementing session count
waiting for available database...
database is available
database is available
initializing session...
done initializing session...
announcing availability...
registering for handoff...
handoffsockname: /usr/local/firstworks/var/sqlrelay/tmp/sockets/dev-handoff
trying...
----------------------------------------------------------------------------------------------------------------------------------
My config file is:
----------------------------------------------------------------------------------------------------------------------------------
<?xml version="1.0"?>
<!DOCTYPE instances SYSTEM "sqlrelay.dtd">
<instances>
<instance id="dev" port="9000" socket="/tmp/db2-dev.socket"
dbase="db2" connections="5" maxconnections="10"
maxqueuelength="0" growby="1" ttl="60" endofsession="commit"
sessiontimeout="60" runasuser="db2user" runasgroup="db2user"
cursors="5" authtier="listener" handoff="pass"
listenertimeout="20"
maxlisteners="20"
debug="listener_and_connection"
>
<users>
<user user="db2user" password="pass" />
</users>
<connections>
<connection connectionid="dev"
string="db=devdb;user=db2user;password=pass;autocommit=yes"
metric="1" behindloadbalancer="no"
/>
</connections>
</instance>
</instances>
----------------------------------------------------------------------------------------------------------------------------------
And here is the catalog and node information for the "devdb" catalog:
Database alias = DEVDB
Database name = FLEXPROD
Node name = DEVDB
Database release level = c.00
Comment =
Directory entry type = Remote
Catalog database partition number = -1
Alternate server hostname =
Alternate server port number =
Node name = DEVDB
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = 192.168.74.29
Service name = 51000
When I manually try to start up the two failed services, they seem to run,
but output no debugging information. Running the sqlrsh command line client
gives me:
----------------------------------------------------------------------------------------------------------------------------------
$ sqlrsh -id dev
SQLRShell - Version 0.22
Connected to: localhost:9000 as db2user
type help; for a help.
0> debug on;
0> select * from sysibm.sysdummy1;
Result Set Buffer Size: 100
Connecting to listener...
Unix socket: /tmp/db2-dev.socket
Inet socket: localhost:9000
Setting Error
Couldn't connect to the listener.
Couldn't connect to the listener.
Rows Returned : 0
Fields Returned : 0
System time : 10000
----------------------------------------------------------------------------------------------------------------------------------
Thanks for any help, and let me know if you need more information!
--Cal
|
|
From: David M. <dav...@fi...> - 2010-04-15 14:26:58
|
Ahhh, that makes sense. Unfortunately, that fix will probably break the 32-bit build. I'll update the code to work on both platforms. Thanks for figuring that out, though :)

Dave
dav...@fi...

On 04/15/2010 05:29 AM, seong hoon park wrote:
> I found a bug for myself :)
>
> In "com_firstworks_sqlrelay_SQLRConnection.C", replacing
> env->GetIntField with env->GetLongField solves the problem.
> [...]
|
|
From: seong h. p. <seo...@gm...> - 2010-04-15 09:29:11
|
I found a bug for myself :)

In "com_firstworks_sqlrelay_SQLRConnection.C", replacing env->GetIntField with env->GetLongField solves the problem.

CAUSE:
When GetIntField() returns a value that is a 32-bit negative number, it becomes a (wrong) 64-bit negative number when cast to "sqlrconnection *". GetLongField(), on the other hand, returns the full 64-bit value in a 64-bit environment.

On Wed, Apr 14, 2010 at 2:36 PM, seong hoon park <seo...@gm...> wrote:
> HI~
>
> On x86_64 Linux, the Java client crashes with SIGSEGV.
>
> - sqlrelay version: 0.41, rudiments version: 0.32 (but the same in
> older versions)
> - JDK: both OpenJDK 1.6 and Sun JDK 1.6
> - OS: RHEL, CentOS
> - On a 32-bit machine, there is no problem.
>
> REPRODUCING:
>
> $ java MyClass.java
>
> /* after some output */
>
> #
> # An unexpected error has been detected by Java Runtime Environment:
> #
> # SIGSEGV (0xb) at pc=0x00002aaaf42d08f1, pid=18442, tid=1087682880
> #
> # Java VM: OpenJDK 64-Bit Server VM (1.6.0-b09 mixed mode linux-amd64)
> # Problematic frame:
> # C [libsqlrclient-0.41.so.1+0x128f1] _ZN10sqlrcursor12prepareQueryEPKcj+0x17
> #
> # An error report file with more information is saved as:
> # /neiz/cpool4/java/hs_err_pid18442.log
> #
> # If you would like to submit a bug report, please visit:
> # http://icedtea.classpath.org/bugzilla
> # The crash happened outside the Java Virtual Machine in native code.
> # See problematic frame for where to report the bug.
> #
> Aborted
>
> - MyClass.java is a simple program that just fetches and prints.
> - The error occurs more frequently when the program is run repeatedly
> than when it is run just once.
> /* MyClass.java */
>
> import com.firstworks.sqlrelay.*;
> import java.io.*;
>
> public class MyClass {
>     public static void main(String[] arg) {
>         int i;
>
>         for (i=0; i<10000; i++) {
>
>             System.out.printf("%d\n",i);
>             SQLRConnection con=new
>                 SQLRConnection("sqlr.ssc.",(short)2001,"","user","pass",0,1);
>             con.delete();
>
>             SQLRCursor cur=new SQLRCursor(con);
>
>             if (!cur.sendQuery("select * from tab")) {
>                 System.out.println(cur.errorMessage());
>             }
>             con.endSession();
>
>             for (int row=0; row<cur.rowCount(); row++) {
>                 String[] rowarray=cur.getRow(row);
>                 for (int col=0; col<cur.colCount(); col++) {
>                     System.out.println(rowarray[col] + ",");
>                 }
>                 System.out.println();
>             }
>
>             cur.delete();
>         }
>     }
> }
|
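The cause described above — a 64-bit native pointer losing its upper half when read back through a 32-bit JNI field — can be sketched in plain Java. This is a minimal illustration of the truncation itself, not SQL Relay's actual JNI code; the 0x00002aaaf42d08f1 address is simply the faulting pc from the crash log, reused here as an example of a 64-bit pointer value.

```java
public class PointerTruncation {

    // Roughly what reading a 64-bit native handle through a 32-bit
    // field (GetIntField) amounts to: truncate to int, then the int is
    // sign-extended back to 64 bits when used as a pointer.
    static long roundTripThroughInt(long nativeHandle) {
        int stored = (int) nativeHandle;   // keeps only the low 32 bits
        return stored;                     // sign-extended back to 64 bits
    }

    public static void main(String[] args) {
        long addr = 0x00002aaaf42d08f1L;
        long back = roundTripThroughInt(addr);

        // The low 32 bits (0xf42d08f1) have the sign bit set, so the
        // value comes back as 0xfffffffff42d08f1 -- a bogus pointer,
        // hence the SIGSEGV in native code.
        System.out.println(Long.toHexString(back));   // fffffffff42d08f1
        System.out.println(back == addr);             // false

        // A handle that fits in 32 bits survives the round trip,
        // which is why the original code worked on 32-bit machines.
        long addr32 = 0x08f1b2c4L;
        System.out.println(roundTripThroughInt(addr32) == addr32);  // true
    }
}
```

This also explains why the GetLongField fix has to be paired with a 64-bit field on the Java side to stay portable, as discussed in the follow-up.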