Performance benchmarks run across various configurations, together with regression testing, have delivered consistent results: NitroCache scales in O(1) as the number of threads increases, i.e. its fetch time remains essentially constant irrespective of thread load.
Goal
- To run a fixed set of operations against 5 popular cache APIs and benchmark their in-memory performance as the number of threads increases (a minimal harness sketch follows this list)
- To evaluate the total time for fetch and put operations, and the throughput (number of operations per second)
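The actual test harness is not reproduced here; the following is only a minimal sketch of how total time and throughput can be measured for one thread-count setting. Everything in it is illustrative: a ConcurrentHashMap stands in for the cache under test, and the operation counts are scaled down from the settings table below.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheBenchmarkSketch {

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the cache under test; the real runs used the libraries listed below.
        final Map<String, Integer> cache = new ConcurrentHashMap<>();

        final int threads = 2;                    // illustrative; the report varies this from 1 to 50
        final long putsPerThread = 1_000_000L;    // scaled down from the settings table
        final long fetchesPerThread = 20_000_000L;

        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                // put phase: each thread writes its own key range
                for (long i = 0; i < putsPerThread; i++) {
                    cache.put("key-" + id + "-" + (i % 2000), (int) i);
                }
                // fetch phase: repeatedly read the same keys back
                for (long i = 0; i < fetchesPerThread; i++) {
                    cache.get("key-" + id + "-" + (i % 2000));
                }
            });
        }

        long start = System.nanoTime();
        for (Thread w : workers) w.start();
        for (Thread w : workers) w.join();
        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;

        long totalOps = threads * (putsPerThread + fetchesPerThread);
        System.out.printf("total time: %.2f s, throughput: %.2f thousand ops/s%n",
                seconds, totalOps / seconds / 1000.0);
    }
}
```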
System Configuration
| System Type | x64-based PC |
| Processor | Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz, 3333 MHz, 2 Core(s), 4 Logical Processor(s) |
| OS | Windows 7 |
Libraries
| Cache | Library |
| --- | --- |
| NitroCache | nitroCache.jar v0.4Beta |
| Cache4j | cache4j_0.4.jar |
| Ehcache | ehcache-1.2.3.jar |
| JCS | jcs-1.3.jar |
| Infinispan | core-4.0.0.Final.jar |
Test Settings:
- All caches are configured to run in fully in-memory mode (a configuration sketch for the Ehcache case follows the settings table below)
- Seven tests are run per cache API with the settings below
- All other settings are left at their defaults
| Setting Id | Cache Size | Threads | Number of Fetch Operations | Number of Put Operations | Number of Unique Keys per Thread |
| --- | --- | --- | --- | --- | --- |
| 1 | 5000 | 1 | 400,000,000 | 20,000,000 | 10,000,000 |
| 2 | 5000 | 2 | 400,000,000 | 20,000,000 | 5,000,000 |
| 3 | 5000 | 5 | 400,000,000 | 20,000,000 | 2,000,000 |
| 4 | 5000 | 10 | 400,000,000 | 20,000,000 | 1,000,000 |
| 5 | 5000 | 15 | 399,600,000 | 19,980,000 | 666,667 |
| 6 | 5000 | 25 | 400,000,000 | 20,000,000 | 400,000 |
| 7 | 5000 | 50 | 400,000,000 | 20,000,000 | 200,000 |
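As an example of the "complete in-memory mode" setting referenced above, here is a minimal sketch of a memory-only Ehcache instance configured programmatically with the ehcache-1.2.x API. The cache name, key, and value are illustrative placeholders, not taken from the actual test code.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class EhcacheInMemoryExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();

        // maxElementsInMemory = 5000 (as in the settings table),
        // overflowToDisk = false keeps the cache purely in memory,
        // eternal = true with TTL/TTI of 0 disables time-based eviction.
        Cache cache = new Cache("benchCache", 5000, false, true, 0, 0);
        manager.addCache(cache);

        cache.put(new Element("key-1", 42));   // put operation
        Element hit = cache.get("key-1");      // fetch operation
        System.out.println(hit == null ? null : hit.getValue());

        manager.shutdown();
    }
}
```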
Test Data
A random set of 2000 key/value pairs of integers is used as seed data. These 2000 keys are then appended with a loop id to generate secondary keys. See AllTest.java in the test-src zip in the download section for the sample test.
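A minimal sketch of the kind of key generation described above (not the actual AllTest.java code): 2000 random integer key/value pairs serve as seed data, and secondary keys are derived by appending a loop id to each seed key. The loop count shown is illustrative; in setting 1, for instance, 10,000,000 unique keys per thread would correspond to 5,000 loops over the 2000 seed keys.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class TestDataSketch {
    public static void main(String[] args) {
        Random random = new Random(42);

        // seed data: 2000 random integer key/value pairs
        Map<Integer, Integer> seed = new HashMap<>();
        while (seed.size() < 2000) {
            seed.put(random.nextInt(), random.nextInt());
        }

        // secondary keys: each seed key appended with a loop id
        int loops = 5;   // illustrative; larger in the real settings
        List<String> secondaryKeys = new ArrayList<>();
        for (Integer key : seed.keySet()) {
            for (int loop = 0; loop < loops; loop++) {
                secondaryKeys.add(key + "-" + loop);   // seed key + loop id
            }
        }
        System.out.println(secondaryKeys.size() + " secondary keys generated");
    }
}
```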
Results
1. Total Fetch Time in Seconds by thread
| Threads | Cache4j | EhCache | Infinispan | JCS | Nitro FIFO | Nitro LRU |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 334.02 | 60.50 | 373.40 | 115.90 | 70.10 | 61.05 |
| 2 | 1,132.93 | 277.37 | 474.05 | 480.35 | 85.88 | 149.06 |
| 5 | 2,875.26 | 759.24 | 516.98 | 997.20 | 126.53 | 164.87 |
| 10 | 5,711.93 | 1,748.20 | 495.62 | 1,497.55 | 122.21 | 139.80 |
| 15 | 8,703.11 | 2,494.02 | 475.10 | 887.05 | 122.02 | 136.11 |
| 25 | 14,893.28 | 4,500.01 | 443.93 | 912.71 | 126.51 | 133.57 |
| 50 | 29,921.08 | 8,547.90 | 427.26 | 849.77 | 109.96 | 132.88 |

2. Total Put Time in Seconds by thread
| Threads | Cache4j | EhCache | Infinispan | JCS | Nitro FIFO | Nitro LRU |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 34.85 | 23.38 | 72.39 | 137.66 | 20.66 | 20.41 |
| 2 | 66.91 | 43.46 | 103.57 | 371.08 | 50.27 | 107.47 |
| 5 | 151.11 | 94.44 | 354.87 | 883.66 | 120.11 | 499.97 |
| 10 | 281.84 | 172.47 | 1,179.43 | 2,605.02 | 444.45 | 1,165.58 |
| 15 | 419.62 | 239.93 | 2,062.05 | 5,297.97 | 731.53 | 1,816.03 |
| 25 | 679.09 | 390.27 | 3,572.85 | 9,449.96 | 1,323.78 | 3,106.88 |
| 50 | 1,434.40 | 699.67 | 7,541.39 | 19,065.07 | 2,733.08 | 6,163.37 |
[[img src=PutTimeByThreadsAll.jpg alt="Graph for Put Time by thread"]]
3. Fetch Throughput: Thousand operations/second
| Threads | Cache4j | EhCache | Infinispan | JCS | Nitro FIFO | Nitro LRU |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1,197.55 | 6,611.79 | 1,071.24 | 3,451.13 | 5,705.89 | 6,551.68 |
| 2 | 353.07 | 1,442.11 | 843.79 | 832.72 | 4,657.61 | 2,683.43 |
| 5 | 139.12 | 526.85 | 773.73 | 401.12 | 3,161.21 | 2,426.17 |
| 10 | 70.03 | 228.81 | 807.07 | 267.10 | 3,273.05 | 2,861.19 |
| 15 | 45.91 | 160.22 | 841.08 | 450.48 | 3,274.82 | 2,935.88 |
| 25 | 26.86 | 88.89 | 901.05 | 438.26 | 3,161.86 | 2,994.68 |
| 50 | 13.37 | 46.80 | 936.20 | 470.71 | 3,637.69 | 3,010.19 |
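These figures appear to follow directly from table 1 as total fetch operations divided by total fetch time. For example, for Cache4j at 2 threads: 400,000,000 fetches / 1,132.93 s ≈ 353.07 thousand operations/second, which matches the value above.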

4. Total Throughput: Thousand operations/second
| Threads | Cache4j | EhCache | Infinispan | JCS | Nitro FIFO | Nitro LRU |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1,167.85 | 6,337.68 | 1,033.39 | 3,293.71 | 5,480.28 | 6,286.36 |
| 2 | 350.49 | 1,395.35 | 812.80 | 795.64 | 4,454.76 | 2,564.51 |
| 5 | 138.80 | 511.84 | 739.57 | 383.10 | 3,018.60 | 2,312.54 |
| 10 | 70.07 | 223.43 | 769.45 | 254.75 | 3,119.34 | 2,725.76 |
| 15 | 46.00 | 156.56 | 801.49 | 429.21 | 3,120.18 | 2,796.60 |
| 25 | 26.98 | 87.10 | 858.41 | 417.49 | 3,012.01 | 2,852.39 |
| 50 | 13.40 | 45.93 | 891.74 | 448.35 | 3,464.81 | 2,867.00 |

Conclusion
1. NitroCache (both FIFO and LRU) delivers the fastest fetch times and the highest total throughput at every setting beyond a single thread
2. NitroCache's fetch time stays roughly constant as the thread count grows, whereas Cache4j and Ehcache fetch times grow steeply with thread count
3. Ehcache and Cache4j are the fastest for put operations but the slowest for fetch operations as the thread count increases
4. JCS is the slowest for put operations, but its overall throughput is average, sitting between Infinispan and Ehcache
Profiling Data
Charts of memory usage/GC and CPU usage percentage versus live thread count
1. Nitro FIFO

2. Nitro LRU

3. Cache4j

4. EhCache

5. Infinispan

6. JCS
