
The cache configured in pikiwidb itself stops working #3014

Open
fdl66 opened this issue Feb 13, 2025 · 7 comments
Labels
☢️ Bug Something isn't working

Comments

@fdl66

fdl66 commented Feb 13, 2025

Is this a regression?

Yes

Description

The cache configured in the file below works in version 4.0.0, but no longer takes effect in version 4.0.2-alpha.
Checking the cache information with the info cache command, and looking at the process's actual memory usage (only table-reader and memtable consume memory), both show that the cache is not in effect.

###################
## Cache Settings
###################
# the number of caches for every db
cache-num : 16

# cache-model 0:cache_none 1:cache_read
cache-model : 1
# cache-type: string, set, zset, list, hash, bit
cache-type : string, set, zset, list, hash, bit

# Maximum number of keys in the zset redis cache
# On the disk DB, a zset key may have many fields. In the memory cache, we limit the maximum
# number of fields that can exist in a zset to zset-cache-field-num-per-key, with a
# default value of 512.
zset-cache-field-num-per-key : 512

# If the number of elements in a zset in the DB exceeds zset-cache-field-num-per-key,
# we determine whether to cache the first 512[zset-cache-field-num-per-key] elements
# or the last 512[zset-cache-field-num-per-key] elements in the zset based on zset-cache-start-direction.
#
# If zset-cache-start-direction is 0, cache the first 512[zset-cache-field-num-per-key] elements from the header
# If zset-cache-start-direction is -1, cache the last 512[zset-cache-field-num-per-key] elements
zset-cache-start-direction : 0


# the cache maxmemory of every db; configured here as 10 GiB (10737418240 bytes)
cache-maxmemory : 10737418240

# cache-maxmemory-policy
# 0: volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# 1: allkeys-lru -> Evict any key using approximated LRU.
# 2: volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# 3: allkeys-lfu -> Evict any key using approximated LFU.
# 4: volatile-random -> Remove a random key among the ones with an expire set.
# 5: allkeys-random -> Remove a random key, any key.
# 6: volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# 7: noeviction -> Don't evict anything, just return an error on write operations.
cache-maxmemory-policy : 1

# cache-maxmemory-samples
cache-maxmemory-samples : 5

# cache-lfu-decay-time
cache-lfu-decay-time : 1
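For reference, the settings above use a plain `key : value` format, and the numeric values can be sanity-checked mechanically. The parser below is a hypothetical sketch (not pikiwidb code) that reads a fragment in that format and verifies that `cache-maxmemory` really is 10 GiB:

```python
# Minimal sketch (not pikiwidb code): parse "key : value" config lines
# like the cache settings above and sanity-check the numeric values.

def parse_cache_conf(text):
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition(":")
        conf[key.strip()] = value.strip()
    return conf

conf = parse_cache_conf("""
# Cache Settings
cache-num : 16
cache-model : 1
cache-maxmemory : 10737418240
""")

assert int(conf["cache-num"]) == 16
# 10737418240 bytes is exactly 10 GiB (10 * 1024**3)
assert int(conf["cache-maxmemory"]) == 10 * 1024**3
```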

Please provide a link to a minimal reproduction of the bug

No response

Screenshots or videos

No response

Please provide the version you discovered this bug in (check about page for version information)

Version verified: 4.0.2-alpha

Anything else?

No response

@fdl66 fdl66 added the ☢️ Bug Something isn't working label Feb 13, 2025
@Issues-translate-bot

Bot detected the issue body's language is not English, translate it automatically.


Title: The cache configured by pikiwidb itself is not working

@Mixficsol
Collaborator

Could you provide the output of the info cache command so we can see the specific situation? We also need reproduction steps.


@fdl66
Author

fdl66 commented Feb 19, 2025

Reproduction steps:

  1. Use version 4.0.2-alpha.
  2. Run the following command to write data (letting it run for a few minutes and then stopping it is enough):
./memtier_benchmark -t 20 -c 50 --pipeline=50 -s 127.0.0.1 -p 9221 --distinct-client-seed  --command="set __key__ __data__" --key-prefix="kv_" --key-minimum=1 --key-maximum=50000 --random-data --data-size=2048 --test-time=7200
  3. Run the following command to read data (again, a few minutes is enough):
./memtier_benchmark -t 20 -c 50 --pipeline=50 -s 127.0.0.1 -p 9221 --distinct-client-seed --command="get __key__" --key-prefix="kv_" --key-minimum=1 --key-maximum=50000 --test-time=1800
  4. On version 4.0.0, running info cache shows that the cache holds data;
     on version 4.0.2-alpha, the info cache output shows the cache is empty:
127.0.0.1:9221> info cache
# Cache
cache_status:Ok
cache_db_num:16
cache_keys:0
cache_memory:156928
cache_memory_human:0M
hits:0
all_cmds:0
hits_per_sec:0
read_cmd_per_sec:0
hitratio_per_sec:0%
hitratio_all:0%
load_keys_per_sec:0
waitting_load_keys_num:0
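The symptom can be checked mechanically by parsing the INFO-style output above. The helper below is a hypothetical sketch (not part of pikiwidb) that assumes the field names shown in the output:

```python
# Hypothetical helper (not pikiwidb code): parse "info cache" text like
# the output above and decide whether the cache actually holds data.

def parse_info_cache(text):
    stats = {}
    for line in text.splitlines():
        if line.startswith("#") or ":" not in line:
            continue  # skip the "# Cache" section header
        key, _, value = line.partition(":")
        stats[key] = value.strip()
    return stats

OUTPUT = """\
# Cache
cache_status:Ok
cache_db_num:16
cache_keys:0
cache_memory:156928
hits:0
all_cmds:0
"""

stats = parse_info_cache(OUTPUT)
assert stats["cache_status"] == "Ok"
# Symptom reported on 4.0.2-alpha: the cache looks empty after a read load.
assert int(stats["cache_keys"]) == 0
assert int(stats["hits"]) == 0
```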

@Issues-translate-bot

Bot detected the issue body's language is not English, translate it automatically.


I tried this locally. The cause is that RTC was added in version 4.0.2; in principle it uses the redis-cache function but does not record any metrics. RTC currently supports a limited set of commands, so you can test the redis-cache function with the MSET and MGET commands. My test here passes, and we will fix the statistics errors you mentioned later; the redis-cache function itself still works.

@Mixficsol
Collaborator

Reproduced this problem locally. The cause is that when the get command updates the cache, it sets an expiration time on the key, so subsequent queries never hit the data in the cache. A fix is in progress, and we are checking whether other commands have the same problem.
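The failure mode described here (the cache-fill path attaching an expiry that the lookup path never survives) can be illustrated with a toy cache. This is a hypothetical sketch, not the actual pikiwidb code:

```python
import time

# Toy illustration (not pikiwidb code) of the bug described above:
# the cache-fill path sets an expiry on the key, so by the time a
# lookup runs the entry is already considered expired.

class ToyCache:
    def __init__(self):
        self.store = {}  # key -> (value, expire_at)

    def set(self, key, value, ttl):
        # Bug analogue: a ttl of 0 makes the entry expire immediately.
        self.store[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expire_at = entry
        if time.monotonic() >= expire_at:
            return None  # treated as a miss, like the reported behaviour
        return value

cache = ToyCache()
cache.set("kv_1", b"data", ttl=0)   # buggy fill: immediate expiry
assert cache.get("kv_1") is None    # every lookup misses
cache.set("kv_1", b"data", ttl=60)  # fill without premature expiry
assert cache.get("kv_1") == b"data"
```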

