- bad things that happen when nodes share a data dir:
  - nodes after the 2nd do not join correctly on startup
  - seeming instability in get/put as nodes join/leave?
- missing tests for node joining?
- after activating a new node in a group, got an error when getting a key
  (see the first sketch after this list for a possible fix):
  Error in process <0.774.0> on node 'b5@localhost' with exit value: {{case_clause,{exit,{{function_clause,[{vector_clock,resolve,[not_found,{[{'b1@localhost',1.227209e+09},{'b2@localhost',1.227212e+09},{'b3@localhost',1.227212e+09},{'b4@localhost',1.227212e+09}],[<<50 bytes>>]}]},{mediator,internal_get,2},{mediator,'-handle_call/3-fun-0-',3}]},{gen_server...
- is gossip necessary? it's very busy; can we instead just use erlang node
  monitoring? (see the second sketch below)
- write an erl load client that calls mediator directly via rpc, to exclude the
  python/thrift part of the load (see the third sketch below)
- do load tests w/ dmerkle and syncing totally disabled
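
on the get error above: the crash is vector_clock:resolve/2 hitting a
function_clause because one replica answers not_found while the others return
real {Clock, Values} data. a minimal sketch of the missing clauses, assuming
resolve/2 otherwise takes two {Clock, Values} tuples (merge/2 below is a
stand-in for the existing merge logic, not dynomite's actual code):

    %% vector_clock.erl -- hypothetical patch sketch
    %% not_found means a replica has never seen the key; defer to
    %% whichever side actually has data instead of crashing.
    resolve(not_found, not_found) -> not_found;
    resolve(not_found, ClockValues) -> ClockValues;
    resolve(ClockValues, not_found) -> ClockValues;
    resolve({ClockA, ValuesA}, {ClockB, ValuesB}) ->
        %% existing merge of two real clock/value pairs goes here
        merge({ClockA, ValuesA}, {ClockB, ValuesB}).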
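
on gossip vs. node monitoring: otp already delivers liveness events through
net_kernel:monitor_nodes/1 (a standard call); the module below is a made-up
illustration of subscribing to nodeup/nodedown. note this only covers
liveness, not any state the gossip rounds also exchange, so it may not be a
full replacement.

    -module(membership_monitor).
    -export([start/0]).

    %% subscribe to the vm's built-in nodeup/nodedown events instead of
    %% gossiping.
    start() ->
        spawn(fun() ->
            ok = net_kernel:monitor_nodes(true),
            loop()
        end).

    loop() ->
        receive
            {nodeup, Node}   -> io:format("node joined: ~p~n", [Node]), loop();
            {nodedown, Node} -> io:format("node left: ~p~n", [Node]), loop()
        end.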
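
on the erl load client: a sketch that times mediator calls over rpc so the
python/thrift layers drop out of the measurement. rpc:call/4 and timer:tc/3
are standard otp; mediator:put/3 (key, context, value) is an assumption and
may need adjusting to the real mediator api.

    -module(load_client).
    -export([run/2]).

    %% run(Node, N): issue N puts straight at dynomite's mediator and
    %% report the average latency, excluding python/thrift entirely.
    run(Node, N) ->
        Times = [begin
            Key = integer_to_list(I),
            %% assumed signature: mediator:put(Key, Context, Value)
            {Micros, _} = timer:tc(rpc, call,
                                   [Node, mediator, put, [Key, [], <<"v">>]]),
            Micros
        end || I <- lists:seq(1, N)],
        io:format("put avg: ~.3fms~n",
                  [lists:sum(Times) / length(Times) / 1000]).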

ec2 baseline

4 clients on 1 server, 4 nodes:
[root@domU-12-31-38-00-A1-D8 pylibs]# PYTHONPATH=. ./tools/dbench_thrift.py -n 1000 -c 4
. . . .
4 client(s)  1000 request(s)  288.838412s
  get avg: 19.292123ms  median: 7.610083ms  99.9: 128.209114ms
  put avg: 52.917480ms  median: 44.903040ms  99.9: 192.390203ms

10 clients on 1 server, 4 nodes:
[root@domU-12-31-38-00-A1-D8 pylibs]# PYTHONPATH=. ./tools/dbench_thrift.py -n 1000 -c 10
. . . . . . . . . .
10 client(s)  1000 request(s)  2450.866554s
  get avg: 69.876119ms  median: 63.191175ms  99.9: 479.489088ms
  put avg: 175.210537ms  median: 164.680958ms  99.9: 581.480026ms
4 x 10 x 300, null storage
  gets: 11248  puts: 11248  collisions: 0
  get avg: 24.683542ms  median: 2.518892ms  99.9: 235.687971ms
  put avg: 30.930105ms  median: 7.802963ms  99.9: 255.034924ms

  gets: 11579  puts: 11579  collisions: 0
  get avg: 25.056636ms  median: 3.570080ms  99.9: 226.491928ms
  put avg: 33.030507ms  median: 8.126974ms  99.9: 269.091129ms

4 x 10 x 300, couch storage
  gets: 10369  puts: 10369  collisions: 0
  get avg: 104.387809ms  median: 57.796001ms  99.9: 420.269012ms
  put avg: 123.423535ms  median: 82.660913ms  99.9: 437.829971ms

4 x 10 x 300, couch storage, native
  gets: 9844  puts: 9844  collisions: 0
  get avg: 101.512082ms  median: 31.171083ms  99.9: 546.752214ms
  put avg: 121.477555ms  median: 62.163115ms  99.9: 549.501896ms

4 x 10 x 300, dict storage
  gets: 10449  puts: 10449  collisions: 0
  get avg: 95.985134ms  median: 44.151068ms  99.9: 435.315847ms
  put avg: 105.687059ms  median: 64.731836ms  99.9: 439.253807ms

4 x 10 x 300, dets storage
  gets: 10140  puts: 10140  collisions: 0
  get avg: 87.448548ms  median: 27.188063ms  99.9: 379.386187ms
  put avg: 99.586617ms  median: 40.298939ms  99.9: 401.525974ms

4 x 10 x 300, fs storage
  gets: 9873  puts: 9873  collisions: 0
  get avg: 92.640568ms  median: 28.508902ms  99.9: 426.862955ms
  put avg: 119.909236ms  median: 67.646980ms  99.9: 484.236002ms

4 x 10 x 300, mnesia storage, native
  gets: 10274  puts: 10274  collisions: 0
  get avg: 92.216225ms  median: 31.336069ms  99.9: 529.864073ms
  put avg: 104.497228ms  median: 52.917004ms  99.9: 552.855968ms

4 x 10 x 300, mnesia storage
  gets: 10152  puts: 10152  collisions: 0
  get avg: 95.652980ms  median: 33.915043ms  99.9: 439.408064ms
  put avg: 109.145108ms  median: 59.654951ms  99.9: 490.454912ms

10 x 4 x 300, mnesia storage
  gets: 11749  puts: 11749  collisions: 0
  get avg: 3.693530ms  median: 1.780987ms  99.9: 127.922058ms
  put avg: 4.523041ms  median: 2.089024ms  99.9: 93.150139ms
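
for reference, the avg / median / 99.9 columns above can be reproduced from a
list of per-request latencies; a minimal sketch (nearest-rank percentiles --
dbench_thrift.py may compute them differently):

    -module(bench_stats).
    -export([summarize/1]).

    %% summarize a list of latencies in ms into avg / median / 99.9.
    summarize(Latencies) ->
        Sorted = lists:sort(Latencies),
        N = length(Sorted),
        Avg = lists:sum(Sorted) / N,
        Nth = fun(P) -> lists:nth(min(N, max(1, round(N * P))), Sorted) end,
        io:format("avg: ~.6fms  median: ~.6fms  99.9: ~.6fms~n",
                  [Avg, float(Nth(0.5)), float(Nth(0.999))]).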