From: Juned K. <jkh...@gm...> - 2014-03-25 09:54:56
Hi Mason,

> how many transactions per second?

There are almost 50 to 100.

> How much of the data needs to be replicated to all nodes and how much can
> be distributed?

It can be distributed equally; if I have two datanodes and 50 records, they
should be distributed equally between them.

> How frequently do (reporting) queries need to look at all billion rows or
> a large portion thereof?

10 to 20 times per day; we have 4 to 5 kinds of CDR reports.

Please suggest.

On Mon, Mar 24, 2014 at 6:27 PM, Mason Sharp <ms...@tr...> wrote:

> On Mon, Mar 24, 2014 at 1:59 AM, Juned Khan <jkh...@gm...> wrote:
>
>> Hi Michel,
>>
>> Thank you very much for the valuable response.
>>
>> As a solution to this problem, can I use a VPN or set up all the
>> components on the same network?
>>
>> Please suggest.
>
> What should the behavior be when the network is down? Is it OK for there
> to be an outage?
>
> I believe I also previously said XC may not be the best fit for what you
> are doing. You could perhaps make it work by modifying the code, but it
> depends on exactly what you are trying to do. You mention a billion rows,
> but how many transactions per second? How much of the data needs to be
> replicated to all nodes and how much can be distributed? How frequently do
> (reporting) queries need to look at all billion rows or a large portion
> thereof? Are writes typically done at each location for a particular set
> of data (regional customers)? Or is it that all sites need to write to all
> of the data equally, and there is no good way of splitting it up?
>
> --
> Mason Sharp
>
> TransLattice - http://www.translattice.com
> Distributed and Clustered Database Solutions

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
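
[Editor's note: the replicated-versus-distributed split Mason asks about maps onto Postgres-XC's table distribution clauses. A minimal sketch, assuming a CDR-style schema; the table and column names here are hypothetical, not from this thread:]

```sql
-- Small, read-mostly reference data: keep a full copy on every
-- datanode, so joins against it never have to cross the network.
CREATE TABLE report_type (
    id    integer PRIMARY KEY,
    name  text
) DISTRIBUTE BY REPLICATION;

-- The large CDR table: hash-distribute it so that, with two
-- datanodes, each holds roughly half of the rows.
CREATE TABLE cdr (
    call_id     bigint,
    caller      text,
    callee      text,
    started_at  timestamptz,
    duration    interval
) DISTRIBUTE BY HASH (call_id);
```

With `DISTRIBUTE BY HASH`, reporting queries that scan the whole table run on all datanodes in parallel and the coordinator merges the results; `DISTRIBUTE BY REPLICATION` suits small lookup tables that are written rarely but joined often.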