From: Vladimir S. <vst...@gm...> - 2012-07-31 08:19:53
> All the nodes in rac are replicated.

Is the same true for mysql cluster? Do you mean to say that only XC is
write scalable?

> There are several cons against that:
> - it is not possible to define a distribution key based on a column

I believe other methods of deciding where to store new incoming data
exist or can be created. At least round-robin. Another one is based on
load-balancing criteria: you choose the node under the least load (see
sketch 1 at the end of this mail).

> - it is not possible to define range partitioning, column partitioning

Is that really necessary for cluster solutions with distributed
databases?

> - the list of nodes is still needed in CREATE TABLE

In this case, when we need to add a new data node, we have to apply the
CREATE/DROP/RENAME technique to every distributed table (sketch 2). But
that is almost equivalent to creating the cluster from scratch. Indeed,
it would be easier to create a dump, drop the database and restore it
from the backup. So it looks like XC is not "XC", i.e. is not
eXtensible. That is why I think all storage control should be moved to
the cluster level.

> It is written in XC definition that it is a synchronous multi-master.
> Doing that in asynchronous way would break that, and also this way you

No! You did not read carefully what I wrote. We keep the classic
distributed XC as the core of our system. It contains the complete data
at every moment and remains a write-scalable synchronous multi-master as
usual. But then we can supplement it with extra replicated nodes that
are updated asynchronously, by a low-priority background process, so
that the cluster stays write scalable. When a read request comes in, it
should go to a replicated node if and only if the requested data already
exists there; otherwise the request should go to the distributed node,
where the data in question exists in any case (sketch 3).

>> Such an architecture allows creating a totally automated and complete
>> LB & HA cluster without any third-party helpers. If one of the
>> distributed (shard) nodes fails, it should be automatically replaced
>> (failover) by one of the up-to-date replicated nodes.
>
> This can be also achieved with postgres streaming replication naturally
> available in XC.

Certainly you mean a postgres standby server as a way of duplicating a
distributed node. We have already discussed that topic: it is one of a
number of external HA solutions. But I wrote about something else. I
mean that an existing replicated node, which currently serves read
requests from the application, can take over the role of any distributed
node in case that node fails (sketch 4). And I suppose this failover
procedure should be automated, triggered by the failure event and
executed in real time.

OK, I see that everything I wrote in this thread is far from the current
XC state, as well as from your thoughts. So you may consider all of this
as my unreachable dreams.
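
Sketch 1. To make the placement idea above concrete: round-robin
already exists in XC's DDL as far as I can tell (dn1..dn3 are example
node names, and I am not sure the TO NODE syntax is exactly the same in
every version); only the least-loaded policy would be something new.

    -- Distribution without any key column: new rows are simply spread
    -- over the listed datanodes in turn.
    CREATE TABLE events (
        id      bigint,
        payload text
    ) DISTRIBUTE BY ROUNDROBIN TO NODE (dn1, dn2, dn3);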
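
Sketch 2. The CREATE/DROP/RENAME technique spelled out for one
hypothetical table "t", after a new datanode dn4 has been registered
with CREATE NODE. Every distributed table needs the same treatment,
which is exactly my complaint. (XC may also restrict some of this DDL
inside a transaction block, so treat it as a sketch.)

    BEGIN;
    -- New table with the same definition, but over the extended node list:
    CREATE TABLE t_new (LIKE t INCLUDING ALL)
        DISTRIBUTE BY HASH (id)
        TO NODE (dn1, dn2, dn3, dn4);
    -- Move the data, then swap the tables:
    INSERT INTO t_new SELECT * FROM t;
    DROP TABLE t;
    ALTER TABLE t_new RENAME TO t;
    COMMIT;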
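
Sketch 3. The read-routing rule ("replica if and only if the data is
already there") could be driven by metadata like this. Everything here
is hypothetical, nothing of the kind exists in XC today: the router
tracks how far each replica has applied the asynchronous stream and
when each key was last written, and falls back to the owning datanode
otherwise.

    CREATE TABLE replica_progress (
        node        text PRIMARY KEY,
        applied_pos bigint   -- how far this replica has applied changes
    );
    CREATE TABLE last_write (
        key        bigint PRIMARY KEY,
        commit_pos bigint    -- position of the last write to this key
    );

    -- Target node for a read of key 42; 'dn_owner' stands for the
    -- distributed node that owns the key.
    SELECT COALESCE(
        (SELECT r.node
           FROM replica_progress r, last_write w
          WHERE w.key = 42
            AND r.applied_pos >= w.commit_pos
          LIMIT 1),
        'dn_owner') AS target_node;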
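
Sketch 4. The failover itself could reuse commands XC already has: once
a replica has been promoted by whatever external means, every
coordinator is re-pointed at it. Node and host names are examples, and
I am going from memory of the node-management syntax.

    -- Re-point the failed datanode definition at the promoted replica:
    ALTER NODE dn2 WITH (HOST = 'replica1.example.com', PORT = 5432);
    -- Let the connection pooler pick up the new address:
    SELECT pgxc_pool_reload();

The missing piece, and the whole point of my proposal, is the
automation around these two statements: detecting the failure and
promoting the replica without a third-party tool.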