MemoryCache is perfectly serviceable. But in some situations, it can be a bottleneck.

# Performance
## ConcurrentLru Hit rate

The charts below show the relative hit rate of classic LRU vs Concurrent LRU on a [Zipfian distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of input keys, with parameter *s* = 0.5 and *s* = 0.86 respectively. If there are *N* items, the probability of accessing an item numbered *i* or less is (*i* / *N*)^*s*.
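To make the input distribution concrete, here is an illustrative sketch (plain Python, not library code; the helper name `sample_zipf` is my own) that draws keys from this distribution by inverse transform sampling of the CDF (*i* / *N*)^*s*:

```python
import math
import random

def sample_zipf(n, s, rng=random):
    """Draw one key in 1..n with P(key <= i) = (i / n) ** s.

    Inverse transform sampling: solving (i / n) ** s = u for i
    gives i = ceil(n * u ** (1 / s)) for uniform u in [0, 1).
    """
    u = rng.random()
    return max(1, math.ceil(n * u ** (1.0 / s)))

# A trace like the one described above: N = 50000 keys, s = 0.86.
trace = [sample_zipf(50000, 0.86) for _ in range(1_000_000)]
```

Smaller *s* gives a flatter distribution with fewer very hot keys, which makes caching harder at any given size.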
Here *N* = 50000, and we take 1 million sample keys. The hit rate is the number of times we get a cache hit divided by 1 million.

This test was repeated with the cache configured to different sizes expressed as a percentage of *N* (e.g. 10% would be a cache with a capacity of 5000).
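The measurement itself is straightforward to reproduce. A minimal sketch, using a plain Python LRU rather than the library's implementation (the helper name `lru_hit_rate` is my own):

```python
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    """Replay a key trace against a classic LRU cache and
    return hits / len(trace)."""
    cache = OrderedDict()
    hits = 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)         # mark as most recently used
        else:
            cache[key] = None              # a real cache would store a value
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used key
    return hits / len(trace)
```

Sweeping `capacity` across percentages of *N* over a Zipfian trace produces curves of the kind charted here.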
When the cache is small, below 15% of the total key space, ConcurrentLru outperforms ClassicLru. In the best case, for *s* = 0.5, when the cache is 2.5% of the total key space, ConcurrentLru outperforms ClassicLru by more than 50%.
<table>
<tr>
<!-- hit rate chart images -->
</tr>
</table>
## ConcurrentLru Benchmarks

In the benchmarks, a cache miss is essentially free. These tests exist purely to compare the raw execution speed of the cache code. In a real setting, where a cache miss is presumably quite expensive, the relative overhead of the cache will be very small.
Benchmarks are based on BenchmarkDotNet, so are single threaded. The ConcurrentLru family of classes can outperform ClassicLru in multithreaded workloads.
All benchmarks below are run on this measly laptop:
Take 1000 samples of a [Zipfian distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) over a set of keys of size *N* and use the keys to lookup values in the cache.

Cache size = *N* / 10 (so we can cache 10% of the total set). ConcurrentLru has approximately the same performance as ClassicLru in this single threaded test.
| Method | Mean | Error | StdDev | Ratio | RatioSD |
|--------|-----:|------:|-------:|------:|--------:|