From: Juned K. <jkh...@gm...> - 2014-03-26 12:49:43
Hi Mason,

That is correct, the mailing list has helped me a lot and I am very
thankful to everyone. I asked about this requirement earlier and received
good responses and advice, but the point you mention is one I cannot
ignore; I should think through these points before moving ahead.

Regards,
Juned Khan

On Tue, Mar 25, 2014 at 7:59 PM, Mason Sharp <ms...@tr...> wrote:
>
> On Tue, Mar 25, 2014 at 9:08 AM, Juned Khan <jkh...@gm...> wrote:
>
>> Hi Mason,
>>
>> OK, that is not too much; regular PostgreSQL could work. Are a lot of
>> those reads? Perhaps you can use PostgreSQL with streaming replication
>> and do reads from a local hot standby, but write to a remote master. It
>> would have to be acceptable, though, that writes are not permitted if
>> the network is down, that those reads may not return the latest version
>> of the row/tuple, and that writes will incur latency. The other thing
>> you could do, if your writes are mainly inserts, is to write to local
>> tables when the network is down, and then write these to the remote
>> master once it is up again.
>>
>> For this scenario, what application should I use? What powerful and
>> stable application do you suggest to achieve this?
>
> I don't know enough about your application; I was trying to give you
> some ideas for possible database architectures, though you may have some
> work ahead of you to put them in place if there is a lot of
> customization. People on the email list have been trying to help you
> over the last few weeks. Your company may want to consider bringing in
> an outside consulting company that can sign an NDA, meet with you, and
> work with you on a solution.
>
> --
> Mason Sharp
>
> TransLattice - http://www.translattice.com
> Distributed and Clustered Database Solutions

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
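
The streaming-replication split Mason describes above (reads from a local
hot standby, writes to a remote master) can be sketched in a few lines of
psycopg2. This is a rough illustration, not a recommendation from the
thread; the host names, credentials, and the "orders" table are
assumptions made up for the example.

    # Read/write split: reads hit the local hot standby (kept current via
    # streaming replication), writes go to the remote primary over the
    # WAN. All connection details and table names are illustrative.
    import psycopg2

    standby = psycopg2.connect(host="standby.local", dbname="app",
                               user="app", password="secret")
    primary = psycopg2.connect(host="primary.example.com", dbname="app",
                               user="app", password="secret")

    def read_orders(customer_id):
        # Served locally and quickly, but the standby may lag slightly,
        # so the rows returned are not guaranteed to be the latest version.
        with standby.cursor() as cur:
            cur.execute("SELECT * FROM orders WHERE customer_id = %s",
                        (customer_id,))
            return cur.fetchall()

    def insert_order(customer_id, amount):
        # Must reach the remote primary: incurs WAN latency and raises
        # psycopg2.OperationalError if the network is down.
        with primary.cursor() as cur:
            cur.execute("INSERT INTO orders (customer_id, amount) "
                        "VALUES (%s, %s)", (customer_id, amount))
        primary.commit()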
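
Mason's second idea, queuing inserts locally while the link is down and
replaying them to the master later, might look roughly like the sketch
below. It assumes a local PostgreSQL instance with a "pending_orders"
table (a hypothetical name), handles only plain inserts that can tolerate
delayed visibility, and omits reconnection handling.

    # Offline insert queue: if the remote primary is unreachable, park
    # the row in a local table; flush_pending() replays queued rows once
    # the network is back. Table and column names are illustrative.
    import psycopg2

    def queued_insert(local_conn, primary_conn, customer_id, amount):
        try:
            with primary_conn.cursor() as cur:
                cur.execute("INSERT INTO orders (customer_id, amount) "
                            "VALUES (%s, %s)", (customer_id, amount))
            primary_conn.commit()
        except psycopg2.OperationalError:
            # Network down: record the insert locally instead.
            with local_conn.cursor() as cur:
                cur.execute("INSERT INTO pending_orders "
                            "(customer_id, amount) VALUES (%s, %s)",
                            (customer_id, amount))
            local_conn.commit()

    def flush_pending(local_conn, primary_conn):
        # Replay queued rows, deleting each one only after its remote
        # insert has committed, so a crash mid-flush cannot lose rows
        # (though it may duplicate the row being replayed at that moment).
        with local_conn.cursor() as cur:
            cur.execute("SELECT id, customer_id, amount "
                        "FROM pending_orders ORDER BY id")
            for row_id, customer_id, amount in cur.fetchall():
                with primary_conn.cursor() as pcur:
                    pcur.execute("INSERT INTO orders (customer_id, amount) "
                                 "VALUES (%s, %s)", (customer_id, amount))
                primary_conn.commit()
                cur.execute("DELETE FROM pending_orders WHERE id = %s",
                            (row_id,))
                local_conn.commit()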