After the cluster expansion completes the slot migration, the total dbsize value of all shard master nodes is greater than the value before the expansion #2715

Open · bug

@903174293 opened this issue on Jan 8, 2025 · 6 comments

Search before asking

  • I had searched in the issues and found no similar issues.

Version

v2.9.0

Minimal reproduce step

1. Kvrocks version: v2.9.0
2. Initial environment: 3 shards, each with one master and one replica; per-node spec: 8C16G, 500 GB disk
3. After expansion: 5 shards, each with one master and one replica; per-node spec: 8C16G, 500 GB disk
4. Expansion method: calling the controller migrate interface
5. Phenomenon (per-master totals summed as in the sketch below):
   Before expansion, the total dbsize across all shard master nodes: 12484
   After expansion, the total dbsize across all shard master nodes: 12489
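
For reference, a minimal sketch (assuming redis-py; the hosts and ports are hypothetical placeholders for the shard master nodes) of how the total dbsize can be summed:

# Minimal sketch (assumed tooling: redis-py). The host/port pairs are
# hypothetical placeholders for the shard master nodes of the cluster.
import redis

MASTERS = [("10.0.0.1", 6666), ("10.0.0.2", 6666), ("10.0.0.3", 6666)]

def total_dbsize(masters):
    total = 0
    for host, port in masters:
        r = redis.Redis(host=host, port=port)
        total += r.dbsize()  # DBSIZE returns this node's key count
    return total

print("total dbsize across masters:", total_dbsize(MASTERS))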

What did you expect to see?

The total dbsize across all master nodes should be the same before and after the expansion.

What did you see instead?

Before expansion, the total dbsize across all shard master nodes: 12484
After expansion, the total dbsize across all shard master nodes: 12489

Anything Else?

Migration interface of the controller (a request sketch follows the payload):

POST /api/v1/namespaces/{namespace}/clusters/{cluster name}/migrate
{
  "target": 4,
  "slot": 1000,
  "slot_only": false
}
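
A hedged sketch of invoking that endpoint (assuming Python with the requests library; the controller base URL, namespace, and cluster name are hypothetical placeholders):

# Minimal sketch of calling the controller migrate endpoint with the payload
# above. The base URL, namespace, and cluster name are hypothetical.
import requests

base = "http://controller.example:9379"
url = base + "/api/v1/namespaces/my-namespace/clusters/my-cluster/migrate"
payload = {"target": 4, "slot": 1000, "slot_only": False}

resp = requests.post(url, json=payload)
resp.raise_for_status()
print(resp.json())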

Are you willing to submit a PR?

  • I'm willing to submit a PR!
@hongleis commented:

I found the same issue in my tests. There were 3 nodes in my cluster before the migration and 5 nodes after it.
Here are my test steps (the key counting in steps 2 and 6 is sketched after the list):

  1. Kvrocks version: 2.9.0.
  2. Before migration: used the SCAN command to count keys across all the master nodes; there were 167005770 keys.
  3. Set migrate-type = raw-key-value in kvrocks.conf.
  4. Called the kvrocks controller migrate API: {{host}}/api/v1/namespaces/{{namespace}}/clusters/{{cluster}}/migrate
  5. Called the controller API to check that the migration was done: http://{{host}}/api/v1/namespaces/akv/clusters/kvrocks_29_test_migrate
  6. After migration: used the SCAN command to count keys across all the master nodes; there were 167016655 keys.
  7. That is 167016655 - 167005770 = 10885 keys duplicated after the migration was done.

Feel free to contact me for more test details.
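
For completeness, a minimal sketch of the per-node key counting from steps 2 and 6 (assuming redis-py; the host and port are hypothetical), using the cursor-based SCAN instead of the blocking KEYS:

# Minimal sketch: count all keys on a single node with the cursor-based SCAN
# command instead of KEYS. Host and port are hypothetical.
import redis

def count_keys(host, port):
    r = redis.Redis(host=host, port=port)
    return sum(1 for _ in r.scan_iter(count=1000))  # SCAN in batches of ~1000

print("keys on node:", count_keys("10.0.0.1", 6666))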

@hongleis commented:

@git-hulk

@git-hulk (Member) commented:

@903174293 @hongleis Thanks for your reports. I missed this issue over the past few days. I want to know:

  • Are all of those duplicated keys still living in the source node?
  • Do those duplicated keys have any new writes while migrating?

@903174293 (Author) commented:

> Are all of those duplicated keys still living in the source node?

Yes, but the keys cannot be found using GET; they can be found using KEYS *.

> Do those duplicated keys have any new writes while migrating?

No.

@hongleis commented:

@git-hulk Each duplicated key is located in both the source node and the destination node. When we use the Redis GET command, the request is routed to the destination node. But when I connect to the source node directly (not in cluster mode) and use the Redis SCAN command, I can find the duplicated key (see the sketch below). In short, this did not affect the first migration, but when we migrate multiple times, I am afraid the duplicated keys that were never cleaned up may become dirty values.
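
To illustrate the discrepancy described above, a minimal sketch (assuming redis-py and that its cluster client works against the kvrocks cluster; the addresses and the key name are hypothetical):

# Minimal sketch of the observation above: GET through the cluster client is
# routed to the destination node, while SCAN directly against the source node
# still sees the leftover copy. Addresses and the key name are hypothetical.
from redis import Redis
from redis.cluster import RedisCluster

key = "some:migrated:key"

rc = RedisCluster(host="10.0.0.1", port=6666)  # any cluster node as entry point
print("cluster GET:", rc.get(key))             # served by the destination node

src = Redis(host="10.0.0.2", port=6666)        # direct, non-cluster connection
print("source SCAN:", list(src.scan_iter(match=key)))  # leftover copy shows up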

@git-hulk (Member) commented on Jan 17, 2025:

@hongleis Got your point; thanks to both of you for the information. I will take a look at this issue when I get time.
