From: Juned K. <jkh...@gm...> - 2014-03-21 05:08:51

Hi All,

I have a very large application. It currently runs on MySQL, but I want
to convert it to postgre-xc.

Queries:
1) What is the minimum server requirement for this application?
2) Will pgxc be able to handle 500 concurrent calls on the server?
3) Is there a parameter where I can define the maximum number of
   connections?
4) If a server crashes, how do I get the data back? How do I back up
   the existing data?

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com

From: Michael P. <mic...@gm...> - 2014-03-21 11:52:33

On Fri, Mar 21, 2014 at 2:08 PM, Juned Khan <jkh...@gm...> wrote:
> Hi All,
>
> I have a very large application. It currently runs on MySQL, but I want
> to convert it to postgre-xc.

postgres-xc. By the way, is there any reason why you are *not*
considering plain PostgreSQL as well?

> 1) What is the minimum server requirement for this application?

There is no real *minimum* requirement AFAIK. However, coupling one
Datanode and one Coordinator per server/VM can be useful to minimize
network latency on queries.

> 2) Will pgxc be able to handle 500 concurrent calls on the server?

Yes, but a single Postgres server can handle that nicely as well when
coupled with a connection pooler. I don't quite see why multi-master
would be a requirement for such a low number, as your question seems
to imply.

> 3) Is there a parameter where I can define the maximum number of
> connections?

max_connections, on each Coordinator/Datanode; the cluster-wide total
is max_connections * number of nodes. You can also define a different
max_connections value per node.

> 4) If a server crashes, how do I get the data back? How do I back up
> the existing data?

This is a vast question, and without details about what you are looking
for in terms of a backup and recovery strategy it is hard to say much.
The problem is similar to backing up Postgres itself, except that here
you need to manage N nodes instead of 1.

Judging from your email, if your requirement is 500 concurrent
connections you might live happily with PostgreSQL + pgbouncer.

Regards,
--
Michael

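A minimal sketch of the PostgreSQL + pgbouncer pairing Michael
suggests, sized for the 500 concurrent clients in the question; the
database name, host, and file paths are placeholders:

    ; pgbouncer.ini
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction      ; reuse server connections per transaction
    max_client_conn = 500        ; client connections pgbouncer accepts
    default_pool_size = 50       ; actual backend connections to PostgreSQL

With transaction pooling, 500 application connections are multiplexed
onto a few dozen backend connections, so PostgreSQL's max_connections
can stay modest.
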
From: Michael P. <mic...@gm...> - 2014-03-24 02:44:59

On Fri, Mar 21, 2014 at 10:13 PM, Juned Khan <jkh...@gm...> wrote:
> Hi Michael,
>
>> 2) Will pgxc be able to handle 500 concurrent calls on the server?
>> Yes, but a single Postgres server can handle that nicely as well when
>> coupled with a connection pooler. I don't quite see why multi-master
>> would be a requirement for such a low number, as your question seems
>> to imply.
>
> I want to use multi-master because I have servers in different states.
> As of now I have two servers, but in the future this will increase to
> four. And one of my tables will have more than one billion records, so
> I will make that table distributed so that local queries can be
> executed on local servers.
> I have attached a diagram of my requirement.
>
> Please suggest.

Don't do that.

Postgres-XC is designed to work with nodes located close to each other,
and for OLTP applications. With a cluster whose nodes are distributed
geographically, your application performance would sink because of GTM
round trips (XID/snapshot acquisition, sequence value acquisition) and
network latency. This is the price of managing concurrency accurately.
--
Michael

From: Juned K. <jkh...@gm...> - 2014-03-24 05:59:29

Hi Michael,

Thank you very much for the valuable response.

As a solution to this problem, can I use a VPN, or set up all the
components on the same network?

Please suggest.

On Mon, Mar 24, 2014 at 8:14 AM, Michael Paquier <mic...@gm...> wrote:
> Don't do that.
>
> Postgres-XC is designed to work with nodes located close to each other,
> and for OLTP applications. With a cluster whose nodes are distributed
> geographically, your application performance would sink because of GTM
> round trips (XID/snapshot acquisition, sequence value acquisition) and
> network latency. This is the price of managing concurrency accurately.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com

From: Mason S. <ms...@tr...> - 2014-03-24 12:57:56

On Mon, Mar 24, 2014 at 1:59 AM, Juned Khan <jkh...@gm...> wrote:
> As a solution to this problem, can I use a VPN, or set up all the
> components on the same network?

What should the behavior be when the network is down? Is it OK for
there to be an outage?

I believe I also previously said XC may not be the best fit for what
you are doing. You could perhaps make it work by modifying the code,
but it depends on exactly what you are trying to do. You mention a
billion rows, but how many transactions per second? How much of the
data needs to be replicated to all nodes and how much can be
distributed? How frequently do (reporting) queries need to look at all
billion rows, or a large portion thereof? Are writes typically done at
each location for a particular set of data (regional customers)? Or is
it that all sites need to write to all of the data equally, and there
is no good way of splitting it up?

--
Mason Sharp

TransLattice - http://www.translattice.com
Distributed and Clustered Database Solutions

From: Juned K. <jkh...@gm...> - 2014-03-25 09:54:56

Hi Mason,

> How many transactions per second?
There are almost 50 to 100.

> How much of the data needs to be replicated to all nodes and how much
> can be distributed?
Equally; if I have two datanodes with 50 records, then the records
should be distributed equally between them.

> How frequently do (reporting) queries need to look at all billion rows
> or a large portion thereof?
10 to 20 times per day; we have 4 to 5 kinds of CDR reports.

Please suggest.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com

From: Mason S. <ma...@st...> - 2014-03-25 12:49:14

On Tue, Mar 25, 2014 at 5:54 AM, Juned Khan <jkh...@gm...> wrote:
> Hi Mason,
>
>> How many transactions per second?
> There are almost 50 to 100.

OK, that is not too much; regular PostgreSQL could work. Are a lot of
those reads? Perhaps you can use PostgreSQL with streaming replication
and do reads from a local hot standby while writing to a remote master.
It would have to be OK, though, that writes are not permitted while the
network is down, that reads may not see the latest version of a
row/tuple, and that writes will incur latency. The other thing you
could do, if your writes are mainly inserts, is to write to local
tables when the network is down and then push them to the remote master
once it is up again.

>> How much of the data needs to be replicated to all nodes and how much
>> can be distributed?
> Equally; if I have two datanodes with 50 records, then the records
> should be distributed equally between them.

OK, so there is no easy, clean distribution rule for you, like
location, country_code, etc.

>> How frequently do (reporting) queries need to look at all billion rows
>> or a large portion thereof?
> 10 to 20 times per day; we have 4 to 5 kinds of CDR reports.

OK. A local hot standby should be able to handle that. If you need
faster performance, you could do periodic loads into a Stado instance.

--
Mason Sharp

StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Services

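A minimal sketch of the streaming-replication setup Mason describes,
assuming PostgreSQL 9.3-era configuration files; the host name and
replication user are placeholders:

    # postgresql.conf on the remote master
    wal_level = hot_standby
    max_wal_senders = 5
    wal_keep_segments = 128        # retain WAL for slow/offline standbys

    # postgresql.conf on the local standby
    hot_standby = on               # allow read-only queries locally

    # recovery.conf on the local standby
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com port=5432 user=replicator'

The application then routes reporting SELECTs to the local standby and
all writes to the remote master, accepting the write latency and
replication lag Mason mentions.
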
From: Juned K. <jkh...@gm...> - 2014-03-25 13:08:17

Hi Mason,

> Perhaps you can use PostgreSQL with streaming replication and do reads
> from a local hot standby while writing to a remote master. [...] The
> other thing you could do, if your writes are mainly inserts, is to
> write to local tables when the network is down and then push them to
> the remote master once it is up again.

For this scenario, what application should I use? What powerful and
stable application do you suggest to achieve this?

As per my requirement, I would have two database servers in each state
(MASTER and SLAVE), and I have to write that data to a central server.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com

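A rough illustration of the write-locally-and-forward idea from the
previous message, assuming PostgreSQL 9.3+ with postgres_fdw on each
state server; the table names (cdr_buffer, cdr_central), server name
(central_db), columns, and credentials are all hypothetical:

    -- local buffer table holding CDRs written while the link is down
    CREATE TABLE cdr_buffer (
        call_id    bigint,
        caller     text,
        callee     text,
        started_at timestamptz,
        duration   interval
    );

    -- foreign table pointing at the central master's cdr table
    CREATE EXTENSION postgres_fdw;
    CREATE SERVER central_db FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'central.example.com', dbname 'cdrdb');
    CREATE USER MAPPING FOR CURRENT_USER SERVER central_db
        OPTIONS (user 'cdr_writer', password 'secret');
    CREATE FOREIGN TABLE cdr_central (
        call_id    bigint,
        caller     text,
        callee     text,
        started_at timestamptz,
        duration   interval
    ) SERVER central_db OPTIONS (table_name 'cdr');

    -- once the link is back, forward the buffered rows and drain the
    -- buffer; note the commit is not two-phase across the two servers
    BEGIN;
    INSERT INTO cdr_central SELECT * FROM cdr_buffer;
    DELETE FROM cdr_buffer;
    COMMIT;
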
From: Mason S. <ms...@tr...> - 2014-03-25 14:29:13

On Tue, Mar 25, 2014 at 9:08 AM, Juned Khan <jkh...@gm...> wrote:
> For this scenario, what application should I use? What powerful and
> stable application do you suggest to achieve this?

I don't know enough about your application; I was trying to give you
some ideas for possible database architectures, though you may have
some work ahead of you to put one in place if there is a lot of
customization. People on the mailing list have been trying to help you
over the last few weeks. Your company may want to consider bringing in
an outside consulting company that can sign an NDA, meet with you, and
work with you on a solution.

--
Mason Sharp

TransLattice - http://www.translattice.com
Distributed and Clustered Database Solutions

From: Juned K. <jkh...@gm...> - 2014-03-26 12:49:43

Hi Mason,

That is correct; the mailing list has helped me a lot, and I am very
thankful to everyone.

I asked about this requirement earlier and received good responses and
advice, but the point you mention is a notable one that I cannot
ignore: I should think about these points before moving ahead.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com

From: Mason S. <ms...@tr...> - 2014-03-26 13:00:34

On Wed, Mar 26, 2014 at 8:49 AM, Juned Khan <jkh...@gm...> wrote:
> I should think about these points before moving ahead.

BTW, I did not mean to discourage you... I just did not want you to try
to use XC for a solution where it is not a good fit (or not a good fit
without some custom development work) and end up saying that
Postgres-XC is not good in general.

--
Mason Sharp

TransLattice - http://www.translattice.com
Distributed and Clustered Database Solutions

From: Koichi S. <koi...@gm...> - 2014-03-27 03:19:28

Juned;

I'd like to know what kind of cluster you need to configure. I would
also like to know whether your application is OLTP, batch, or analytic,
and which XC features you want. If it is OLTP, how many transactions
are you handling, and is each transaction as simple as DBT-1/DBT-2 or a
bit more complicated, like DBT-5?

You're trying to add datanodes, so I imagine you're trying to start
small and then expand the cluster. I'd like to know how your
application is growing in size and transaction volume.

Also, I'd like to know how much HA capability you need and what kind of
automatic failover tool you are planning to use.

These are always very important points when you choose the most
suitable database for your application. They are very important to
think about in advance, and if you can share them (not what your
application is, but the above characteristics), we may be able to
provide more suitable information to you.

Best Regards
---
Koichi Suzuki

From: Juned K. <jkh...@gm...> - 2014-03-27 11:24:33

Hi Koichi,

As of now, what I am trying to do is set up pgxc for one state, but in
the future the number of states will increase to five. I want to take
advantage of the multi-master functionality as well as data
synchronization between all the states. With my application there will
be around 1000 concurrent connections.

For HA I am setting up a GTM master and slave; if required I can set up
slaves for the datanodes as well. On the application side I am thinking
of using Heartbeat for failover. I will have only one table in our
application that will hold billions of records.

I am looking for a solution that provides both multi-master and
high-availability features; these two features are the main reason I
want to use pgxc.

Please suggest.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com

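A sketch of the GTM master/slave pairing mentioned above, using the
gtm.conf parameters as documented for Postgres-XC 1.x; host names and
the data directories are placeholders, and the exact parameter names
should be checked against the release you deploy:

    # gtm.conf on the GTM master
    nodename = 'gtm_master'
    listen_addresses = '*'
    port = 6666
    startup = ACT

    # gtm.conf on the GTM slave (standby)
    nodename = 'gtm_slave'
    listen_addresses = '*'
    port = 6666
    startup = STANDBY
    active_host = 'gtm-master.example.com'   # where the active GTM runs
    active_port = 6666

On failure of the master, the standby is promoted (gtm_ctl provides a
promote mode) and the Coordinators/Datanodes are repointed to it, which
is the step a tool like Heartbeat would automate.
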
From: 鈴木 幸市 <ko...@in...> - 2014-03-28 00:52:19

What do you mean by "multi-master"? If you mean "multi-master
replication", it is reasonable not to expect write scalability; XC will
provide read scalability in this case, though.

XC's write scalability is achieved by both GTM (global XIDs) and the
combination of table sharding and replication. Stable tables should be
replicated, while frequently-updated tables should be sharded.

Hope your application matches XC.
---
Koichi Suzuki

On 2014/03/27 20:24, Juned Khan <jkh...@gm...> wrote:
> I am looking for a solution that provides both multi-master and
> high-availability features; these two features are the main reason I
> want to use pgxc.
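
A sketch of the sharding/replication distinction Koichi describes,
using Postgres-XC's DISTRIBUTE BY clause; the table definitions are
hypothetical:

    -- frequently-updated, billion-row CDR table: shard it by hash so
    -- each datanode holds only part of the data
    CREATE TABLE cdr (
        call_id bigint,
        caller  text,
        callee  text
    ) DISTRIBUTE BY HASH (call_id);

    -- small, stable lookup table: replicate it to every datanode so
    -- joins against it stay local
    CREATE TABLE rate_plan (
        plan_id integer,
        name    text
    ) DISTRIBUTE BY REPLICATION;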