From: Vladimir S. <vst...@gm...> - 2012-10-25 00:40:33
On Wed, Oct 24, 2012 at 11:27:25PM +0100, Paulo Pires wrote:

> FYI there is technology that deprecates the need of rebooting a
> machine following a kernel update, such as ksplice (bought by Oracle
> a couple of years ago). There is such a Debian package, but it is not
> commonly used. I believe you can add new machinery (new coordinators,
> new data nodes) and deprecate old hardware. Am I being too simplistic
> thinking this way? Anyway, changing a cluster's hardware every two
> years seems overkill to me. But of course, it depends on your app
> growth.

We are not talking about upgrades here, we are talking about
scalability (adding and removing nodes, as sketched at the end of this
message), remember?

> Yes, internal is (supposedly) easier or, as you say, "transparent" -
> I'd use the word "seamless". But you'll need to learn it and take
> care of it somehow, the same way you'd do with external solutions,
> such as haproxy or keepalived. I don't think HA/Clustering/LB is for
> the "faint of heart". Either you know what you're doing, or leave
> this matter alone! You'll save your sanity in the medium term...

Knowing how an automobile works doesn't mean you want to build one
just for your own use. And in our context, remember again, extra
complexity means not only extra software but extra infrastructure,
i.e. extra hardware as well. I am already using corosync, pacemaker,
ipvs, ldirectord, drbd and keepalived. But here we are discussing a
database cluster, and that needs a different approach. I want to use
some of those tools to distribute requests between coordinators and to
provide failover for the ipvs distribution point and the GTM. But I
don't want standby data nodes: all nodes should be under load, with
enough redundancy to survive the loss of any one node. Health
monitoring and failover should be done internally by XC in this case.

> I don't understand why you keep citing MySQL as an example. *Don't
> get me wrong here*, but if you feel it to be the right tool, just go
> with it

I have already explained this here twice: it is not the right tool,
because it is an in-memory database. But it has the right clustering
model, and that is why I cite it here as a good exemplar.

> and leave the ones who think the same about Postgres-XC alone.

This is a good tool for closing down any discussion about anything.

> Do you know anyone putting up a database cluster without
> HA/Clustering/LB knowledge? If you do, please ask them to stop.

That question is not for me. Look at the quotes above.

> If at least this was a "who has more users" competition, that would
> make sense. The best tools I use in my day-to-day job didn't come
> easy! I don't agree with you on this, at all.

But I do agree with you on this point. It is not about the "easy way"
or about "more users", though. I don't think we would lose flexibility
with a clustering model where the distribution scheme is defined at
the cluster level; I believe it could still include distribution at
the table level (see the DDL sketch at the end of this message). So it
may simply be a question of defaults: well-designed complex things are
easy to use with their default settings, yet still provide enough
flexibility.

> I *only* had to change my biggest app DDL (which is generated by some
> Java JPA tool) in order to test DISTRIBUTE BY. But I'm good with 100%
> replication... for now. In the end I made *zero* changes!

I don't see how this story helps in a production environment.

***************************
###   Vladimir Stavrinov
###   vst...@gm...
***************************
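
To make the scalability point above concrete, here is a minimal sketch
of what adding (and later dropping) nodes looks like in Postgres-XC
SQL, as far as I understand it. The node names, hosts and ports are
invented, and actually moving data onto or off a data node is a
separate, version-dependent step:

    -- Register a new data node and a new coordinator on an existing
    -- coordinator, then reload the connection pooler so they become
    -- visible to new sessions.
    CREATE NODE dn_new WITH (TYPE = 'datanode',    HOST = '10.0.0.21', PORT = 15432);
    CREATE NODE co_new WITH (TYPE = 'coordinator', HOST = '10.0.0.22', PORT = 5432);
    SELECT pgxc_pool_reload();

    -- Once an old machine no longer holds any data, it can be retired:
    -- DROP NODE dn_old;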
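
And here is what I mean by distribution defined at the table level.
This is only a sketch with invented table and column names, and the
exact options may vary between XC versions:

    -- A small reference table, fully replicated on every data node:
    CREATE TABLE countries (
        code  char(2) PRIMARY KEY,
        name  text
    ) DISTRIBUTE BY REPLICATION;

    -- A large table, hash-distributed across the data nodes by id:
    CREATE TABLE orders (
        id           bigint,
        customer_id  bigint,
        total        numeric
    ) DISTRIBUTE BY HASH (id);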