From: Koichi S. <koi...@gm...> - 2013-10-07 10:42:22
I do hope your work with Postgres-XC is successful. Please write here if
you have any other issues. Good luck!
---
Koichi Suzuki


2013/10/4 Hector M. Jacas <hec...@et...>

> Hi all,
>
> First of all, I would like to thank Mr. Koichi Suzuki for his comments
> on my previous post. He was right about the max_connections parameter:
> in the new configuration each datanode allows a maximum of 800
> connections (three coordinators with 200 concurrent connections each,
> plus 200 extra).
>
> Regarding the suggested stress tool (dbt-1), I am studying its
> characteristics and the particulars of its installation and use. When I
> get my first results with it, I will publish them in this forum.
>
> I still have details to resolve, such as how to use the
> gtmPxyExtraConfig parameter of pgxc_ctl.conf to add settings (for
> example, worker_threads = xx) to the gtm_proxy configuration. This was
> one of the problems detected during the initial deployment with the
> pgxc_ctl tool.
>
> This is my second post about my impressions of pgxc-1.1.
>
> In the previous post I described the scenario we want to build, as well
> as the configuration needed to reach an acceptable level of
> functionality and operability.
>
> Once we reached that point, we designed and executed a set of tests
> (based on pgbench) to measure the performance of our installation and
> to know when we had reached our goals: 300 (or more) concurrent
> connections and a higher number of transactional operations.
>
> The modifications were made mainly to the datanode configuration. They
> were applied through the datanodeExtraConfig parameter (pgxc_ctl.conf
> file) and were as follows:
>
> # ================================================
> # Added to all the DataNode postgresql.conf
> # Original: datanodeExtraConfig
> log_destination = 'syslog'
> logging_collector = on
> log_directory = 'pg_log'
> listen_addresses = '*'
> max_connections = 800
> work_mem = 100MB
> fsync = off
> shared_buffers = 5GB
> wal_buffers = 1MB
> checkpoint_timeout = 5min
> effective_cache_size = 12GB
> checkpoint_segments = 64
> checkpoint_completion_target = 0.9
> maintenance_work_mem = 4GB
> max_prepared_transactions = 800
> synchronous_commit = off
>
> These modifications gave us results two to three times better than the
> initial set (in some cases, more than three times), both in the number
> of transactions and in tps.
>
> Our other goal, more than 300 concurrent connections, was also reached:
> we measured up to 355 concurrent connections.
>
> During the tests we measured CPU and memory consumption on each
> component of the cluster (dn01, dn02, dn03, gtm).
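>
> (A note for anyone reproducing this: the sar sampling described next
> can also be started alongside each pgbench run with a small wrapper
> like the one below. This is only a sketch; the script name and sample
> count are my own choices, and we actually started sar by hand on each
> server.)
>
> #!/bin/sh
> # bench_with_sar.sh: sample CPU/memory with sar while one pgbench
> # test runs. Usage: ./bench_with_sar.sh <clients> <threads> <host>
> CLIENTS=$1; THREADS=$2; HOST=$3
>
> # 14 samples at 5-second intervals cover a 60-second run with margin
> sar -u -r 5 14 > "sar_c${CLIENTS}_j${THREADS}.log" &
> SAR_PID=$!
>
> pgbench -c "$CLIENTS" -j "$THREADS" -T 60 -h "$HOST" -U postgres pgbench
>
> # wait for the last sar samples before starting the next test
> wait $SAR_PID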
>
> For these measurements we used sar with the following options:
>
> -r Report memory utilization statistics
> -u Report CPU utilization
>
> We requested 200 iterations at 5-second intervals (14 tests of 60
> seconds each, divided by 5 seconds, comes to roughly 170 iterations).
>
> This command was executed on each server before launching the tests:
>
> sar -u -r 5 200
>
> The averages obtained per server:
>
> dn01:
> Average:   CPU    %user   %nice   %system   %iowait   %steal    %idle
> Average:   all    15.79    0.00     18.20      0.44     0.00    65.57
>
> Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
> Average:   12003982    4327934     26.50      44163   1766752   7616567    37.35
>
> dn02:
> Average:   CPU    %user   %nice   %system   %iowait   %steal    %idle
> Average:   all    14.89    0.00     17.37      0.11     0.00    67.62
>
> Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
> Average:   12097661    4234255     25.93      42716   1725394   7609960    37.31
>
> dn03:
> Average:   CPU    %user   %nice   %system   %iowait   %steal    %idle
> Average:   all    16.67    0.00     19.59      0.57     0.00    63.17
>
> Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
> Average:   11603908    4728008     28.95      42955   1708146   7609769    37.31
>
> gtm:
> Average:   CPU    %user   %nice   %system   %iowait   %steal    %idle
> Average:   all     8.54    0.00     24.80      0.12     0.00    66.54
>
> Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
> Average:    3553938     370626      9.44      42358    120419    723856     9.06
>
> The results obtained in each of the tests:
>
> [root@rhelclient ~]# pgbench -c 16 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> query mode: simple
> number of clients: 16
> number of threads: 8
> duration: 60 s
> number of transactions actually processed: 23680
> tps = 394.458636 (including connections establishing)
> tps = 394.637063 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -S -c 16 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 16
> number of threads: 8
> duration: 60 s
> number of transactions actually processed: 108929
> tps = 1815.247714 (including connections establishing)
> tps = 1815.947505 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -c 16 -j 16 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> query mode: simple
> number of clients: 16
> number of threads: 16
> duration: 60 s
> number of transactions actually processed: 23953
> tps = 399.034541 (including connections establishing)
> tps = 399.120451 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -S -c 16 -j 16 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 16
> number of threads: 16
> duration: 60 s
> number of transactions actually processed: 127142
> tps = 2118.825088 (including connections establishing)
> tps = 2119.318006 (excluding connections establishing)
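>
> (The remaining client/thread combinations were run the same way. For
> anyone repeating this, a loop along these lines would cover the whole
> matrix in one go; this is just a sketch, we launched each command by
> hand:)
>
> #!/bin/sh
> # run_matrix.sh: run the TPC-B and SELECT-only tests for each
> # client/thread combination used in this report
> HOST=192.168.97.44
> for CJ in "16 8" "16 16" "96 8" "64 32" "64 64" "104 8" "300 10"; do
>     set -- $CJ                    # $1 = clients, $2 = threads
>     pgbench    -c "$1" -j "$2" -T 60 -h "$HOST" -U postgres pgbench
>     pgbench -S -c "$1" -j "$2" -T 60 -h "$HOST" -U postgres pgbench
> done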
>
> [root@rhelclient ~]# pgbench -c 96 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> query mode: simple
> number of clients: 96
> number of threads: 8
> duration: 60 s
> number of transactions actually processed: 95644
> tps = 1592.722011 (including connections establishing)
> tps = 1595.906611 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -S -c 96 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 96
> number of threads: 8
> duration: 60 s
> number of transactions actually processed: 580728
> tps = 9675.754717 (including connections establishing)
> tps = 9695.954649 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -c 64 -j 32 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> query mode: simple
> number of clients: 64
> number of threads: 32
> duration: 60 s
> number of transactions actually processed: 72239
> tps = 1183.511659 (including connections establishing)
> tps = 1184.529232 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -S -c 64 -j 32 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 64
> number of threads: 32
> duration: 60 s
> number of transactions actually processed: 388861
> tps = 6479.326642 (including connections establishing)
> tps = 6482.532350 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -c 64 -j 64 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> query mode: simple
> number of clients: 64
> number of threads: 64
> duration: 60 s
> number of transactions actually processed: 61663
> tps = 1026.636406 (including connections establishing)
> tps = 1027.679280 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -S -c 64 -j 64 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 64
> number of threads: 64
> duration: 60 s
> number of transactions actually processed: 369321
> tps = 6151.931064 (including connections establishing)
> tps = 6155.611035 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -c 104 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> query mode: simple
> number of clients: 104
> number of threads: 8
> duration: 60 s
> number of transactions actually processed: 80479
> tps = 1337.396423 (including connections establishing)
> tps = 1347.248687 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -S -c 104 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 104
> number of threads: 8
> duration: 60 s
> number of transactions actually processed: 587109
> tps = 9782.401960 (including connections establishing)
> tps = 9805.111450 (excluding connections establishing)
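>
> (During the 300-client run that follows, the number of clients actually
> connected at a given moment can be checked with a count over
> pg_stat_activity on the coordinator; this is standard PostgreSQL and,
> as far as I can tell, behaves the same on an XC coordinator:)
>
> psql -h 192.168.97.44 -U postgres -d pgbench \
>      -c "SELECT count(*) FROM pg_stat_activity;"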
>
> [root@rhelclient ~]# pgbench -c 300 -j 10 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> query mode: simple
> number of clients: 300
> number of threads: 10
> duration: 60 s
> number of transactions actually processed: 171351
> tps = 2849.021939 (including connections establishing)
> tps = 2869.345032 (excluding connections establishing)
>
> [root@rhelclient ~]# pgbench -S -c 300 -j 10 -T 60 -h 192.168.97.44 -U postgres pgbench
> starting vacuum...end.
> transaction type: SELECT only
> scaling factor: 100
> query mode: simple
> number of clients: 300
> number of threads: 10
> duration: 60 s
> number of transactions actually processed: 1177464
> tps = 19613.592584 (including connections establishing)
> tps = 19716.537285 (excluding connections establishing)
>
> [root@rhelclient ~]#
>
> Our new goal is to give our pgxc cluster High Availability by adding
> slave servers for the datanodes (and perhaps for the coordinators).
> Once we have that environment, we will run the same set of tests again,
> plus new ones, such as fault-recovery tests that simulate the loss of
> cluster components.
>
> OK, that's all for now.
>
> Thank you very much. This is a great project and I'm very glad I found
> it.
>
> Thanks again,
>
> Hector M. Jacas