From: cubridcluster <no...@so...> - 2011-06-14 05:30:19
#451: [TASK] CUBRID Cluster Performance Tuning
---------------------------+------------------------------------------------
 Reporter:  wzcsoft         |       Owner:  cubridcluster-triage
     Type:  defect          |      Status:  closed
 Priority:  major           |   Milestone:  M3 - beta
Component:  cubrid (all)    |     Version:  Cluster 2010 R1
 Severity:  unclear         |  Resolution:  fixed
 Keywords:  #451 [TASK] CUBRID Cluster Performance Tuning
---------------------------+------------------------------------------------
Changes (by dong1):

 * status:  new => closed
 * resolution:  => fixed

Comment:

 1. The gethostbyname() issue has been fixed; wzcsoft has checked in the code.
 2. This one is hard to refactor; we will consider it later.
 3. The global transaction refactoring is finished; the CAS now connects to the server directly.

 The issue can be closed.

--
Ticket URL: <http://sourceforge.net/apps/trac/cubridcluster/ticket/451#comment:3>
cubridcluster <http://sourceforge.net/projects/cubridcluster/>
SF-project cubridcluster
From: cubridcluster <no...@so...> - 2011-06-14 05:26:37
#234: [Task] Refactor: replicate schema when a new node joins
---------------------------+------------------------------------------------
 Reporter:  miaoyu          |       Owner:  cubridcluster-triage
     Type:  task            |      Status:  closed
 Priority:  minor           |   Milestone:  M3 - beta
Component:  cubrid (all)    |     Version:  Cluster 2010 R1
 Severity:  unclear         |  Resolution:  duplicate
 Keywords:                  |
---------------------------+------------------------------------------------
Changes (by dong1):

 * status:  new => closed
 * resolution:  => duplicate

Comment:

 There is a new ticket to track this issue; see #490. This one can be closed.

--
Ticket URL: <http://sourceforge.net/apps/trac/cubridcluster/ticket/234#comment:1>
cubridcluster <http://sourceforge.net/projects/cubridcluster/>
SF-project cubridcluster
From: cubridcluster <no...@so...> - 2011-06-14 05:20:23
#220: [FR] Global Transaction
---------------------------+------------------------------------------------
 Reporter:  miaoyu          |       Owner:  miaoyu
     Type:  task            |      Status:  closed
 Priority:  major           |   Milestone:  M3 - beta
Component:  cubrid (all)    |     Version:  Cluster 2010 R1
 Severity:  unclear         |  Resolution:  fixed
 Keywords:  #220 Global Transaction
---------------------------+------------------------------------------------
Changes (by dong1):

 * status:  assigned => closed
 * resolution:  => fixed

Comment:

 The whole feature has been implemented. It can be closed.

--
Ticket URL: <http://sourceforge.net/apps/trac/cubridcluster/ticket/220#comment:4>
cubridcluster <http://sourceforge.net/projects/cubridcluster/>
SF-project cubridcluster
From: cubridcluster <no...@so...> - 2011-06-14 05:19:52
#218: [FR] Schema of constraint -- Foreign key
---------------------------+------------------------------------------------
 Reporter:  miaoyu          |       Owner:  cubridcluster-triage
     Type:  task            |      Status:  assigned
 Priority:  major           |   Milestone:  M3 - beta
Component:  cubrid (all)    |     Version:  Cluster 2010 R1
 Severity:  unclear         |
 Keywords:  #218 Schema of constraint -- Foreign key
---------------------------+------------------------------------------------
Changes (by dong1):

 * owner:  miaoyu => cubridcluster-triage

Comment:

 Whether to support foreign keys in M3 still needs to be discussed.

--
Ticket URL: <http://sourceforge.net/apps/trac/cubridcluster/ticket/218#comment:6>
cubridcluster <http://sourceforge.net/projects/cubridcluster/>
SF-project cubridcluster
From: cubridcluster <no...@so...> - 2011-06-14 05:13:13
#503: [Task] Global transaction refactoring
---------------------------+------------------------------------------------
 Reporter:  miaoyu          |       Owner:  miaoyu
     Type:  task            |      Status:  closed
 Priority:  major           |   Milestone:  M3 - beta
Component:  cubrid (all)    |     Version:  Cluster 2010 R1
 Severity:  unclear         |  Resolution:  fixed
 Keywords:  #220 Global Transaction
---------------------------+------------------------------------------------
Changes (by dong1):

 * status:  accepted => closed
 * resolution:  => fixed

Comment:

 The task has been finished; see #504 for the code review. It can be closed.

--
Ticket URL: <http://sourceforge.net/apps/trac/cubridcluster/ticket/503#comment:2>
cubridcluster <http://sourceforge.net/projects/cubridcluster/>
SF-project cubridcluster
From: cubridcluster <no...@so...> - 2011-06-14 05:11:34
#377: [Task] Implement internal savepoint in cluster
---------------------------+------------------------------------------------
 Reporter:  miaoyu          |       Owner:  cubridcluster-triage
     Type:  task            |      Status:  closed
 Priority:  major           |   Milestone:  M3 - beta
Component:  cubrid (all)    |     Version:  Cluster 2010 R1
 Severity:  unclear         |  Resolution:  duplicate
 Keywords:  #220 Global Transaction
---------------------------+------------------------------------------------
Changes (by dong1):

 * status:  assigned => closed
 * resolution:  => duplicate

--
Ticket URL: <http://sourceforge.net/apps/trac/cubridcluster/ticket/377#comment:5>
cubridcluster <http://sourceforge.net/projects/cubridcluster/>
SF-project cubridcluster
From: cubridcluster <no...@so...> - 2011-06-14 05:10:37
#308: [Task] Support TRANSACTION ISOLATION and LOCK TIMEOUT setting in the global transaction
---------------------------+------------------------------------------------
 Reporter:  miaoyu          |       Owner:  cubridcluster-triage
     Type:  task            |      Status:  closed
 Priority:  major           |   Milestone:  M3 - beta
Component:  cubrid (all)    |     Version:  Cluster 2010 R1
 Severity:  unclear         |  Resolution:  duplicate
 Keywords:  #220 Global Transaction
---------------------------+------------------------------------------------
Changes (by dong1):

 * status:  assigned => closed
 * resolution:  => duplicate

--
Ticket URL: <http://sourceforge.net/apps/trac/cubridcluster/ticket/308#comment:5>
cubridcluster <http://sourceforge.net/projects/cubridcluster/>
SF-project cubridcluster
From: cubridcluster <no...@so...> - 2011-06-02 01:18:45
#368: [TASK] Improve the lock in xserial_get_next_value()
---------------------------+------------------------------------------------
 Reporter:  wzcsoft         |       Owner:  cubridcluster-triage
     Type:  defect          |      Status:  assigned
 Priority:  major           |   Milestone:  Future Release
Component:  cubrid (all)    |     Version:  Cluster 2010 R1
 Severity:  unclear         |
 Keywords:  #368 Improve the lock in xserial_get_next_value()
---------------------------+------------------------------------------------
Changes (by dong1):

 * owner:  => cubridcluster-triage
 * status:  new => assigned
 * milestone:  M3 - beta => Future Release

Comment:

 I suggest we move this issue to a later release.

--
Ticket URL: <http://sourceforge.net/apps/trac/cubridcluster/ticket/368#comment:3>
cubridcluster <http://sourceforge.net/projects/cubridcluster/>
SF-project cubridcluster
From: SourceForge.net <no...@so...> - 2011-05-24 08:36:58
The following forum message was posted by dong1 at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4542991:

I want to add a description about it. Li Lin suggests that when we drop a [b]global[/b] user, we also delete the corresponding rows in _db_auth. For example, when dropping Global_user1:

[code]
Grantor       grantee       class_of  auth_type  is_grantable
User1         user2         xx        select     1
User1         Global_user1  xx        update     1
Global_user1  user3         xx        update     0
[/code]

We delete only the rows whose grantee is Global_user1; we do not delete the rows whose grantor is Global_user1. So after dropping Global_user1, _db_auth will look like:

[code]
Grantor       grantee       class_of  auth_type  is_grantable
User1         user2         xx        select     1
Global_user1  user3         xx        update     0
[/code]

Dropping a local user is not changed.
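A minimal SQL sketch of the cleanup described in the message above. It assumes that the _db_auth grantee column references the user object and that the object exposes a name attribute reachable by a path expression; the actual internal cleanup in CUBRID Cluster may take a different path.

[code]
-- Hypothetical sketch only: remove the rows granted TO the dropped global user,
-- but keep the rows it granted to others (rows with grantor = Global_user1 stay).
DELETE FROM _db_auth WHERE grantee.name = 'Global_user1';
[/code]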
From: SourceForge.net <no...@so...> - 2011-05-24 07:32:08
The following forum message was posted by linli54 at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4542991:

OK, thanks.
From: SourceForge.net <no...@so...> - 2011-05-24 07:25:16
The following forum message was posted by iamyaw at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4542991:

How could I have another, better solution? Go ahead as planned if your team has discussed this issue enough.
From: SourceForge.net <no...@so...> - 2011-05-24 07:20:04
The following forum message was posted by linli54 at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4542991:

Now the work is to clean up the authorization information when dropping a global user, so that a new global user created later with the same OID will have no authorizations. My solution is to delete it directly. Do you have another solution? Thanks.
From: SourceForge.net <no...@so...> - 2011-05-24 06:37:04
The following forum message was posted by iamyaw at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4542991:

OK... I can understand what happens when the global user is dropped, but sorry, I cannot understand what you want to do. Do you mean that the current CUBRID does not delete the related _db_auth entries when dropping a (local) user, and that this is a defect (bug), right? Thus, are you suggesting that we fix this bug in the CUBRID Cluster version? If you are, that is absolutely good. Thanks.
From: SourceForge.net <no...@so...> - 2011-05-24 06:19:28
The following forum message was posted by linli54 at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4542991:

Hi Park,

There is a defect in CUBRID: when a user is dropped, nothing is done about its authorizations, so the tables _db_auth, db_auth, and db_authorization keep authorization information about the deleted user. That is acceptable in CUBRID. In Cluster, however, global users reuse the same OIDs: dropping a global user just changes its name, and the OID will be reused later, so the old authorization information affects a new global user that takes the same OID.

Like this: there are local users user1, user2 and user3. user1 creates a table xx and grants user2 SELECT and UPDATE authorization with grant option, so user2 can grant SELECT and UPDATE to user3; it grants just UPDATE. Now _db_auth looks like:

Grantor  grantee  class_of  auth_type  is_grantable
User1    user2    xx        select     1
User1    user2    xx        update     1
User2    user3    xx        update     0

When user2 is dropped, _db_auth will look like:

Grantor  grantee  class_of  auth_type  is_grantable
User1    user2    xx        select     1
User1    null     xx        update     1
null     user3    xx        update     0

That is OK when the users are local, just as in CUBRID. But in Cluster, with the reserved global user pool solution, dropping a global user only changes its name while the OID stays the same, so the table above becomes:

Grantor         grantee        class_of  auth_type  is_grantable
User1           user2          xx        select     1
User1           Reserved_name  xx        update     1
Reserved_name   user3          xx        update     0

Attention: user names are used here for demonstration; in Cluster these are OIDs.

So something must be done about authorization. For global users, when one is dropped we will clean up its authorization information, so that the freed global user OID does not keep any authorizations; that is, the OID of the dropped global user must not appear as the grantee attribute in _db_auth, db_auth, or db_authorization. I think this can be achieved by a delete operation that is compiled on one node and executed on all nodes. What do you think about it, and do you have a better solution?
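For reference, a short SQL sketch of the grant chain described above, written with standard CUBRID GRANT syntax. The table name xx and the users come from the example in the message; which user runs each statement is noted in the comments.

[code]
-- Run as user1, the owner of table xx: give user2 SELECT and UPDATE,
-- allowing user2 to pass them on.
GRANT SELECT, UPDATE ON xx TO user2 WITH GRANT OPTION;

-- Run as user2: pass on only UPDATE to user3.
GRANT UPDATE ON xx TO user3;
[/code]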
From: SourceForge.net <no...@so...> - 2011-05-13 08:05:25
The following forum message was posted by tzliang at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4527497:

The lock_timeout_in_secs PRM should be set in cubrid.conf.
From: SourceForge.net <no...@so...> - 2011-05-13 08:00:57
The following forum message was posted by tzliang at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4527497:

Set lock_timeout_in_secs=120 and try again. Following the same steps as the previous test, after 2 minutes node1 gets an ERROR message:

In the command from line 1, ERROR: Your transaction (index 2, (unknown)@(unknown)|0) timed out waiting on X_LOCK lock on class gt1 because of deadlock. You are waiting for user(s) to finish. 0 command(s) successfully processed.

It means node1 returned having done nothing, and node2 got the lock and executed correctly.
From: SourceForge.net <no...@so...> - 2011-05-12 13:08:41
The following forum message was posted by tzliang at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4527497:

First, add 2 lines at lines 12018 and 12019 in schema_manager.c:

12018     printf ("wait untill lock local class\n");
12019     sleep (5);

The deadlock scenario is as follows:

Step 1: create a global table gt1 on node1:
  create global table gt1(id int) on node 'node1';

Step 2: execute an ALTER command on node1:
  ALTER GLOBAL TABLE gt1 ADD COLUMN col_1 varchar;
It will sleep 5 seconds in the function lockhint_subclasses. Then, before node1 wakes up, execute another ALTER command on node2:
  ALTER GLOBAL TABLE gt1 ADD COLUMN col_2 varchar;

Then the two processes on node1 and node2 will deadlock. The process on node1 is deadlocked in the lockhint_subclasses function, which means it has locked the real class MOP in its local server and wants to lock the remote proxy class on node2. The process on node2 is deadlocked in the locator_lock function, which means it has locked the proxy class MOP in its local server and wants to lock the remote real class on node1.
From: SourceForge.net <no...@so...> - 2011-05-12 01:57:19
The following forum message was posted by linli54 at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4526716:

Hi Park,

There is a question about dropping a global user. As we determined before, a global user (group) can have local users as its members, like global{global, local}, so the same global user can have different local users on different nodes. For example, on node1 a global user gu1 has the global user gu2 and the local users lu1 and lu2 as members, i.e. [b]gu1{gu2,lu1,lu2}[/b]; on node2 the global user gu1 may have the global user gu3 and the local users lu3 and lu4 as members, i.e. [b]gu1{gu3,lu3,lu4}[/b], and the local users lu1, lu2, lu3 and lu4 may have the same names or the same OIDs.

When a global user is dropped, its attributes direct_groups and groups are adjusted. Adjusting the direct_groups attribute uses an UPDATE SQL clause which directly removes the global user; as in gu1{gu2,lu1,lu2}, it deletes gu1 from the direct_groups attribute of gu2, lu1 and lu2, which is fine. But adjusting the groups attribute selects all the members of the adjusted direct_groups attribute set, which returns local users (in the example above, lu1 and lu2), so it conflicts with the local users lu3 and lu4 in the working CAS.

In our opinion, the group (member) relation should separate global users and local users, i.e. [b]global{global}[/b] and [b]local{local}[/b]. Then the same scenario appears on all the nodes, and operations on global users are the same on all the nodes. For authorizations between global users and local users, group (member) relations will not be used to transfer authorizations; GRANT and REVOKE statements will be used directly. Doing this puts restrictions on the CREATE statement and the add_member method. What do you think about it?
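A small SQL sketch of the membership restriction proposed above, using the stock CUBRID CREATE USER ... GROUPS clause. The GLOBAL keyword and the table t1 are only illustrative assumptions, since the thread does not show the Cluster DDL for global users.

[code]
-- Hypothetical illustration of the global{global} / local{local} rule.
CREATE GLOBAL USER gu1;
CREATE GLOBAL USER gu2 GROUPS gu1;   -- allowed: a global user joins a global group
CREATE USER lu1 GROUPS gu1;          -- would be rejected: a local user may not join a global group
GRANT SELECT ON t1 TO lu1;           -- authorization is passed by GRANT/REVOKE instead
[/code]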
From: SourceForge.net <no...@so...> - 2011-05-11 09:43:54
The following forum message was posted by wzcsoft at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4525760:

Hi Park,

Now I will implement this feature: when a new node joins the cluster, synchronize the existing global schema to the new node; when a cluster node leaves the cluster, clean the global schema on that node. I want to describe my basic ideas about the feature and hear your advice. In fact, we have discussed the topic before, here: [url]https://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/3894688[/url].

My basic idea is our conclusion: schema synchronization will be implemented in the "REGISTER/UNREGISTER" commands. When the user executes the "REGISTER" command, add the node information into the system catalog table and then do the synchronization. When the "UNREGISTER" command is executed, remove the node information and the schema information.

Can I take the next steps following this basic idea? Thanks.
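Purely for illustration, a sketch of how the two commands described above might be invoked. The thread does not show the actual REGISTER/UNREGISTER syntax, so the statement form and the node name here are assumptions.

[code]
-- Hypothetical syntax: add node2 to the catalog, then synchronize the global schema to it.
REGISTER NODE 'node2';

-- Hypothetical syntax: remove node2 from the catalog and clean the global schema on it.
UNREGISTER NODE 'node2';
[/code]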
From: SourceForge.net <no...@so...> - 2011-05-11 09:02:46
The following forum message was posted by piaosongmei at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4493565:

Hi, Park. According to the previous VTC meeting, you said we can lock the class that will be selected. But the lock only acts between different transactions, and in the update process the select and the insert belong to one transaction. Also, after our discussion of M3's scope, there is no update-key feature, only the remove-partition feature. In detail: originally there are two partitions on two nodes, and now we want to add one partition, so some data needs to be removed from its current partition. For this feature, I think we do not need to lock any class for the update if we do not consider the performance issue. So my solution for the M3 update is: each node executes the update separately without a lock. How about it?
From: SourceForge.net <no...@so...> - 2011-05-06 03:43:54
The following forum message was posted by tzliang at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4504443:

The timeout in s2s communication applies when the connection pool receives a new connection request but is already fully occupied by others, so the new request has to wait. Now we need to define this timeout.
From: SourceForge.net <no...@so...> - 2011-05-03 02:42:13
The following forum message was posted by iamyaw at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4504443:

Sorry for the late response. I am not sure why the timeout feature is necessary for s2s communication. Is there anyone who has an opinion about this? I suggest that you do not add the timeout feature to the M3 items before the necessity is clearly proven and agreed with everyone else. Thanks.
From: SourceForge.net <no...@so...> - 2011-04-28 10:30:09
The following forum message was posted by piaosongmei at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4493565:

OK, I've updated the update process.
From: SourceForge.net <no...@so...> - 2011-04-26 08:56:23
The following forum message was posted by tzliang at http://sourceforge.net/projects/cubridcluster/forums/forum/1150693/topic/4504443:

Hi, Park. There isn't any timeout in s2s communication currently. Now it needs to be added in M3; that means when the defined time runs out, the s2s connection will be freed. I have the following proposal:

1. Allow the user to define the s2s communication timeout in cubrid.conf as max_s2s_conn_wait_time.
2. Set the default wait time in s2s communication to 2 minutes.

What do you think about that? Please give me your opinion. Thank you~