From: Mason S. <ma...@st...> - 2014-03-25 12:49:14
On Tue, Mar 25, 2014 at 5:54 AM, Juned Khan <jkh...@gm...> wrote:

> Hi Mason,
>
>> how many transactions per second?
>
> there are almost 50 to 100

OK, that is not too much; regular PostgreSQL could work. Are a lot of
those reads? Perhaps you can use PostgreSQL with streaming replication
and do reads from a local hot standby while writing to a remote master.
It would have to be acceptable, though, that writes are not permitted
while the network is down, that those reads may not see the latest
version of a row/tuple, and that writes will incur latency.

The other thing you could do, if your writes are mainly inserts, is to
write to local tables when the network is down, and then write these to
the remote master once it is up again.

>> How much of the data needs to be replicated to all nodes and how much
>> can be distributed?
>
> Equally; if I have two datanodes with 50 records then they should be
> distributed equally

OK, so there is no easy, clean distribution rule for you, like location,
country_code, etc.

>> How frequently do (reporting) queries need to look at all billion rows
>> or a large portion thereof?
>
> 10 to 20 times per day; we have 4 to 5 kinds of CDR reports.

OK. A local hot standby should be able to handle it. If you need faster
performance, you could do periodic loads into a Stado instance.

> Please suggest.
>
> On Mon, Mar 24, 2014 at 6:27 PM, Mason Sharp <ms...@tr...> wrote:
>
>> On Mon, Mar 24, 2014 at 1:59 AM, Juned Khan <jkh...@gm...> wrote:
>>
>>> Hi Michel,
>>>
>>> Thank you very much for the valuable response.
>>>
>>> As a solution to this problem, can I use a VPN or set up all
>>> components on the same network?
>>>
>>> Please suggest.
>>
>> What should the behavior be when the network is down? Is it OK for
>> there to be an outage?
>>
>> I believe I also previously said XC may not be the best fit for what
>> you are doing. You could perhaps make it work by modifying the code,
>> but it depends on exactly what you are trying to do. You mention a
>> billion rows, but how many transactions per second? How much of the
>> data needs to be replicated to all nodes and how much can be
>> distributed? How frequently do (reporting) queries need to look at all
>> billion rows or a large portion thereof? Are writes typically done at
>> each location for a particular set of data (regional customers)? Or is
>> it that all sites need to write to all of the data equally, and there
>> is no good way of splitting it up?
>>
>> --
>> Mason Sharp
>>
>> TransLattice - http://www.translattice.com
>> Distributed and Clustered Database Solutions
>
> --
> Thanks,
> Juned Khan
> iNextrix Technologies Pvt Ltd.
> www.inextrix.com

--
Mason Sharp

StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Services
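
[For illustration, a minimal sketch of the hot-standby setup suggested
above, assuming a PostgreSQL 9.2/9.3 era install; "master.example.com",
the "replicator" role, the standby IP, and the data directory are
placeholders, not details from the thread:]

    # On the remote master -- postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3

    # pg_hba.conf on the master: let the local standby connect for replication
    # host  replication  replicator  192.0.2.10/32  md5

    # On the local standby: clone the master, then allow read-only queries
    pg_basebackup -h master.example.com -U replicator -D /var/lib/pgsql/data -X stream

    # recovery.conf on the standby
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com user=replicator'

    # postgresql.conf on the standby
    hot_standby = on

[Reads then go to the local standby and writes to the remote master;
when the WAN link drops, reads keep working but writes fail until it
returns, which is exactly the trade-off described above.]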
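
[And a rough sketch of the insert-buffering idea, assuming the CDR data
is insert-only; the "cdr" table and its columns are made-up examples:]

    -- On the local node: a holding table with the same shape as the real one
    CREATE TABLE cdr_buffer (LIKE cdr INCLUDING DEFAULTS);

    -- While the link is down, the application inserts locally instead:
    INSERT INTO cdr_buffer (call_id, caller, callee, started_at, duration_s)
    VALUES (1001, '14085551001', '14085552002', now(), 73);

    -- Once the master is reachable again, drain the buffer in one batch,
    -- e.g. export locally and load remotely with psql's \copy:
    --   psql -h localhost -d cdrdb -c "\copy cdr_buffer TO 'pending.csv' CSV"
    --   psql -h master.example.com -d cdrdb -c "\copy cdr FROM 'pending.csv' CSV"
    -- and truncate only after the remote load is confirmed:
    TRUNCATE cdr_buffer;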
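
[As for the periodic loads into Stado, the export side is plain
PostgreSQL; the Stado-side load step is left to its own loader
documentation. Host, database, and table names are again placeholders:]

    # Cron job on the standby: dump the last day's CDRs for reporting
    psql -h localhost -d cdrdb -c "\copy (SELECT * FROM cdr WHERE started_at >= now() - interval '1 day') TO '/tmp/cdr_daily.csv' CSV"
    # ...then bulk-load /tmp/cdr_daily.csv into the Stado instance.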