Hi all,
I am trying to understand more clearly how the Burstiness settings
(Burst Length + Transfer Delay) should affect my storage disk performance.
When I run my test (100% Read, 100% Random, 8KB Transfer Size, 16 Outstanding
I/Os), my storage gives me 1000 IOPS with an average response time of 12 ms.
If I use the same settings but change the Burst Length to 1000, leaving the
Transfer Delay at 0 ms, my storage performance drops to 700 IOPS and the
average response time increases to 14 ms.
Since I kept my Transfer Delay at 0 ms, the workload should be continuous (no
bursts), and thus the IOPS should not be affected. Am I right?
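As a sanity check on the numbers above, Little's Law says the average number of outstanding I/Os equals IOPS times mean response time. The sketch below (my own illustration, not anything from Iometer) applies it to the two measurements; it suggests the effective concurrency drops from about 12 to about 9.8 in the burst-length-1000 test, which would be consistent with idle gaps appearing between bursts despite the 0 ms Transfer Delay.

```python
# Little's Law: average outstanding I/Os = IOPS * mean response time (in seconds).
def effective_concurrency(iops, resp_ms):
    """Return the average number of in-flight I/Os implied by the measurements."""
    return iops * (resp_ms / 1000.0)

# Test 1: Burst Length 1 (default) -> 1000 IOPS @ 12 ms
print(effective_concurrency(1000, 12))  # 12.0 in-flight I/Os
# Test 2: Burst Length 1000, Transfer Delay 0 ms -> 700 IOPS @ 14 ms
print(effective_concurrency(700, 14))   # 9.8 in-flight I/Os
```

Neither figure reaches the configured 16 outstanding I/Os, and the second test sustains noticeably fewer in-flight requests, so something in the burst scheduling does seem to reduce the delivered queue depth.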
I didn't find much detail on the Burstiness settings in the Iometer User's
Guide, but I did find the following:
"If the Transfer Delay value is 0, the Burst Length is not significant
because there is no delay between bursts."
So, based on this information and my understanding of the Burstiness
settings, why don't these two tests give the same performance?
If someone has more knowledge of these settings, I would greatly appreciate
your sharing it.
(Note that all other Iometer settings were kept at their default values.
The same logical device was used for both tests, and no other workloads were
generated on this storage subsystem during the tests.)
--
View this message in context: http://old.nabble.com/Need-clarifications-on-IOMeter-Burstiness-Settings-%28Burst-Length-%2B-Transfer-Delay%29-tp33414580p33414580.html
Sent from the iometer-user mailing list archive at Nabble.com.