The URL below is the design for the update process with key (i.e., an UPDATE that changes the partition key), after our discussion.
Please review it.
Thanks.
url:
https://sourceforge.net/apps/trac/cubridcluster/wiki/RemoteUpdateService
What are the related ticket numbers?
Is it for moving data when executing UPDATE on a distributed partition? I don't understand the aim of this design spec.
I cannot understand what you mean by:
- "without key process is almost serial execution that split XASL at start server execution and each server will execute each sub_XASL. with key process is both of serial and parallel."
Do you mean "without key" - global table / "with key" - distributed partition?
- "master node send list file to master node for sending to cas." -> master to master?? There may be a mistake in the wording.
Anyway, your design is too complicated, so I cannot follow your main idea.
Could you explain your idea in sentences first?
Hi Park,
When adding/removing a partition in a local partitioned table, CUBRID issues an UPDATE statement (such as: UPDATE Ltable1 SET id = id, where id is the partition key) to redistribute the data across all partitions.
This is an UPDATE statement that updates the partition key column.
In the beta we need to support altering a global hash table (adding/removing partitions).
Just as with a local partitioned table, this will issue an UPDATE statement (UPDATE gtable1 SET id = id, where id is the partition key) to redistribute data across the partitions.
In M2, we only support UPDATE statements that do not update the partition key column.
So in the beta, we need to support UPDATE statements that do update the partition key column.
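To make the scenario concrete, here is a minimal sketch; the ADD PARTITION syntax shown is only an assumption for illustration, not confirmed CUBRID Cluster DDL:

    -- hypothetical beta scenario: altering a global hash table
    ALTER TABLE gtable1 ADD PARTITION PARTITIONS 1;  -- assumed DDL syntax

    -- internally, redistribution is driven by a key-preserving UPDATE:
    UPDATE gtable1 SET id = id;  -- id is the partition key; rehashing moves rows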
I think this is what songmei meant:
1. "without key": an UPDATE statement that does not update the partition key column.
2. "with key": an UPDATE statement that updates the partition key column.
These two kinds of UPDATE statements behave differently in CUBRID Cluster. The first kind does not cause data redistribution; rows are updated in place on each server. The second kind causes rows to be deleted from one server and inserted into another.
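A minimal sketch of the difference (the name column and the per-node statements are illustrative assumptions, not actual CUBRID Cluster internals):

    -- "without key": the row stays on its server and is updated in place
    UPDATE gtable1 SET name = 'new' WHERE id = 5;

    -- "with key": the new key value may hash to a different server, so the
    -- engine effectively performs, within one transaction:
    --   on the source node: DELETE FROM gtable1 WHERE id = 5;
    --   on the target node: INSERT INTO gtable1 (id, name) VALUES (7, 'new');
    UPDATE gtable1 SET id = 7 WHERE id = 5;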
In the design doc, songmei gives one possible solution for the second kind of UPDATE statement.
We will prepare another solution and would like to explain it to you at Tuesday's VTC meeting.
Thanks,
Thank you, 1dong.
This design discussion is for tickets #493 and #494. OK, I've got the context.
Hi, Park,
I've posted a new solution for this process.
Let's talk about it at today's meeting.
url:
https://cubridcluster.svn.sourceforge.net/svnroot/cubridcluster/wiki/trac/htdocs/overall_execution_framework/update_with_key2.JPG
OK, I've updated the update process.
Hi, Park,
In the previous VTC meeting, you said we could lock the class that will be selected. But the lock only takes effect between different transactions, and in the update process the SELECT and the INSERT belong to the same transaction.
Also, after our discussion of M3's scope, there is no update-key feature, only the add/remove partition feature. In detail: originally there are two partitions on two nodes, and now we want to add one partition, so some data has to move out of the existing partitions into the new one.
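To illustrate why the data has to move, assuming simple MOD-based hash placement (the placement function here is only an assumption for illustration):

    -- rows whose hash slot changes when going from 2 to 3 partitions must move
    SELECT id,
           MOD(id, 2) AS node_before,   -- two partitions on two nodes
           MOD(id, 3) AS node_after     -- after adding a third partition
      FROM gtable1
     WHERE MOD(id, 2) <> MOD(id, 3);    -- these rows need to be relocated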
For this feature, I think we don't need to lock any class during the update, if we set aside the performance issue.
So my solution for the M3 update is: each node executes its update independently, without locking.
How about it?