Hi, I have been trying to test TPC-C performance against a PostgreSQL server running on Ubuntu Server 12.10 64-bit in a virtual machine with the following specifications: 4 vCPUs, 16 GB RAM, 50 GB disk, and PostgreSQL 9.2.4 64-bit. I have set shared_buffers to 4096MB and kernel.shmmax to 819235520.
ESXi 4 host Server: Intel E5520 @ 2.27 GHz, 8 cores, 2 sockets, 16 GB RAM, HDD 552 GB, 7200 RPM.
I am not able to get a flat profile in the transaction counter (even after 4-5 minutes it still shows numerous peaks and troughs); moreover, the TPM figure is very low and varies between 5,000 and 12,000.
I have tried TPC-C test schemas with 100, 80 and 40 warehouses (without changing the driver script options). There are no other VMs running on the same ESXi host, and no other applications are using resources on the VM.
Any suggestions as to what could be causing this unstable behavior? Is it a hardware bottleneck, or perhaps the PostgreSQL server configuration?
Usually this sort of issue comes down to Postgres configuration, followed by disk I/O configuration for logging.
For Postgres 9.2.2 on a 2-socket E5 system I have measured 162,874 NOPM / 375,123 TPM with the configuration below (showing only the uncommented parameters), so you should have plenty of performance headroom and should be able to achieve a smooth transaction profile. Running under VMware with 4 vCPUs should be fine, although it is good testing practice to first find the best performance on native hardware before going virtualized, so you know where the performance potential lies.
listen_addresses = '172.17.1.98' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 256 # (change requires restart)
shared_buffers = 32000MB # min 128kB
temp_buffers = 2000MB # min 800kB
work_mem = 16MB # min 64kB
maintenance_work_mem = 512MB # min 1MB
max_stack_depth = 7MB # min 100kB
max_files_per_process = 4000 # min 25
effective_io_concurrency = 16 # 1-1000; 0 disables prefetching
wal_level = minimal # minimal, archive, or hot_standby
synchronous_commit = off # synchronization level;
wal_buffers = 32MB # min 32kB, -1 sets based on shared_buffers
checkpoint_segments = 64 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 1h # range 30s-1h
checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0
checkpoint_warning = 0 # 0 disables
seq_page_cost = 2.0 # measured on an arbitrary scale
random_page_cost = 2.0 # same scale as above
effective_cache_size = 32000MB
default_statistics_target = 5000 # range 1-10000
log_destination = 'stderr' # Valid values are combinations of
logging_collector = on # Enable capturing of stderr and csvlog
log_line_prefix = '%t ' # special values:
log_timezone = 'GB'
autovacuum = off # Enable autovacuum subprocess? 'on'
datestyle = 'iso, mdy'
timezone = 'GB'
lc_messages = 'en_US.UTF-8' # locale for system error message
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
max_locks_per_transaction = 64 # min 10
max_pred_locks_per_transaction = 64 # min 10
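Note that a shared_buffers value this large requires kernel.shmmax to be raised accordingly on pre-9.3 PostgreSQL, since the whole buffer pool is allocated as one System V shared memory segment. A minimal sizing sketch, assuming a Linux host; the 20% overhead factor is a rough rule-of-thumb assumption, not an exact figure:

```shell
# Size kernel.shmmax (in bytes) to fit shared_buffers plus a rough
# ~20% allowance for PostgreSQL's other shared-memory structures.
SHARED_BUFFERS_MB=32000
SHMMAX=$(( SHARED_BUFFERS_MB * 1024 * 1024 * 12 / 10 ))
echo "kernel.shmmax = $SHMMAX"

# To apply (requires root):
#   sysctl -w kernel.shmmax=$SHMMAX                       # immediate
#   echo "kernel.shmmax = $SHMMAX" >> /etc/sysctl.conf    # persistent
```

If shmmax is too small for the requested shared_buffers, the server will refuse to start and report the segment size it asked for, which you can use directly as the shmmax value.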
Thank you very much for the suggestions.
I have copied your configuration parameters into my postgresql.conf, except that I kept max_connections = 100 and set shared_buffers = effective_cache_size = 7000MB. Then I rebuilt the TPC-C schema with 100 warehouses (I previously had 40), restarted the PostgreSQL server, and ran four throughput pre-tests with 1, 2, 4 and 8 virtual users in turn.
I did manage to get a very nice, smooth TPM profile (~22,000 TPM) with 1 virtual user, but the results of the subsequent pre-tests with 2 (~30,000 TPM), 4 (~30,000 TPM) and in particular 8 (only ~4,000 TPM) virtual users again contain numerous peaks and troughs.
Could it be caused by the size of kernel.shmmax (8192MB), or what else could be causing this instability in performance? My current TPC-C schema is around 15GB.