From: Koichi S. <koi...@gm...> - 2014-05-27 01:59:03
Yeah, this is what's needed now. I think we should have some more discussion on it to find the best approach. I understand that streaming replication is not for this use case and that we need something better. As Josh suggested, I'd like to gather more ideas/requirements on this.

Thank you;
---
Koichi Suzuki


2014-05-27 9:10 GMT+09:00 Tim Uckun <tim...@gm...>:
> Would it be possible to keep the shards in multiple data nodes, so that if
> one data node failed you'd just replace it when you can get around to it?
>
> Elasticsearch uses this strategy.
>
>
> On Sun, May 25, 2014 at 8:04 AM, Koichi Suzuki <koi...@gm...>
> wrote:
>>
>> At present, XC advises making a replica with synchronous replication.
>> Pgxc_ctl configures slaves in this way.
>>
>> I understand that this is not good for performance and we may need some
>> other solution for this.
>>
>> To begin with, there are a couple of ideas for this.
>>
>> 1. Allow async replication; when a node fails, fall the whole cluster
>> back to the latest consistent state, such as one pointed to by a barrier.
>> I can provide some detailed thoughts on this if there is interest.
>>
>> 2. Allow a copy of each shard to be kept on another node, handled at the
>> planner/executor level.
>>
>> 3. Implement another replication scheme better suited to XC using BDR,
>> for distributed tables only, for example.
>>
>> At present, XC uses the hash value of the node name to determine each
>> row's location for distributed tables. For ideas 2 and 3, we need to add
>> some infrastructure to make this allocation more flexible.
>>
>> Further input is welcome.
>>
>> Thank you.
>> ---
>> Koichi Suzuki
>>
>>
>> 2014-05-24 14:53 GMT-04:00 Josh Berkus <jo...@ag...>:
>> > All:
>> >
>> > So, in addition to the stability issues raised at the Postgres-XC
>> > summit, I need to raise something which is a deficiency of both XC and
>> > XL and should be (in my opinion) our #2 priority after stability. And
>> > that's node/shard redundancy.
>> >
>> > Right now, if a single node fails, the cluster is frozen for writes ...
>> > and fails some reads ... until the node is replaced by the user from a
>> > replica. It's also not clear that we *can* actually replace a node from
>> > a replica, because the replica will be on async replication, and thus
>> > not at exactly the same GXID as the rest of the cluster. This makes XC
>> > a low-availability solution.
>> >
>> > The answer to this is to do the same thing that every other clustering
>> > system has done: write each shard to multiple locations. The default
>> > would be two. If each shard is present on two different nodes, then
>> > losing a node is just a performance problem, not a downtime event.
>> >
>> > Thoughts?
>> >
>> > --
>> > Josh Berkus
>> > PostgreSQL Experts Inc.
>> > http://pgexperts.com
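
Idea 1 above builds on the barrier mechanism XC already has. As a minimal
sketch only (the barrier name and archive path are made-up examples; this
assumes the CREATE BARRIER command and the recovery_target_barrier
recovery.conf parameter from the XC documentation), a cluster-wide
consistent point can be recorded and later used as a common recovery target
on every node:

    -- On any coordinator: ask every node to write a barrier record to its
    -- WAL, marking a cluster-wide consistent point.
    CREATE BARRIER 'before_node_failure';

    # recovery.conf on each node being restored: roll every node back to
    # the same barrier so the cluster as a whole stops at one consistent
    # state.
    restore_command = 'cp /path/to/archive/%f %p'
    recovery_target_barrier = 'before_node_failure'

The obvious cost with async replication is that anything committed after
the chosen barrier is lost when the cluster falls back to it.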
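On the row-placement point for ideas 2 and 3: a distributed table in XC is
declared today with DISTRIBUTE BY and TO NODE, which ties each row to
exactly one data node. The first statement below is roughly the current
syntax (the node names are just examples); the commented-out second form is
purely a hypothetical illustration of the kind of declaration idea 2 would
need, and does not exist in XC.

    -- Current XC syntax: each row is placed on exactly one of the listed
    -- data nodes.
    CREATE TABLE t1 (id int, payload text)
        DISTRIBUTE BY HASH (id)
        TO NODE (datanode1, datanode2, datanode3);

    -- Hypothetical (NOT implemented): keep every shard on two of the
    -- nodes, so losing a single data node is a performance problem rather
    -- than an outage.
    -- CREATE TABLE t1 (id int, payload text)
    --     DISTRIBUTE BY HASH (id) COPIES 2
    --     TO NODE (datanode1, datanode2, datanode3);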