Strange issue,
Due to an unrelated bug I had to reboot a server that had been working without issue, reloaded the write-back cache, and noticed in 'atop -d -a' that all logical volumes are writing directly to the back-end device (SATA).
CentOS 6.6, KVM, LVM
cache device name: ssdcache
backend drive: /dev/sdb1
ssd cache: /dev/sda4
If I create a new VPS (and therefore a new logical volume), the new VPS hits the cache; all the ones created pre-reboot hit the slow block device directly.
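The creation command itself is not shown in this report; a fresh logical volume on this volume group would typically be created with something along these lines (the size and the kvm_new_img name are only placeholders):

# create a new LV on the Kvmvol volume group, which sits on the cached PV
lvcreate -L 12G -n kvm_new_img Kvmvol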
pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/ssdcache
  VG Name               Kvmvol
So I can see the underlying PV is as expected.
  --- Volume group ---
  VG Name               Kvmvol
As expected.
Example LV created before the reboot:
  --- Logical volume ---
  LV Path                /dev/Kvmvol/kvm266_img
  LV Name                kvm266_img
  VG Name                Kvmvol
  LV UUID                JGqgxy-9goC-F0Lo-RG4A-RDq2-VQsh-6vZ5js
  LV Write Access        read/write
  LV Creation host, time some.hostname, 2015-07-31 22:36:46 +0200
  LV Status              available
  # open                 1
  LV Size                12.00 GiB
  Current LE             96
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:106

  --- Segments ---
  Logical extents 0 to 95:
    Type                linear
    Physical volume     /dev/mapper/ssdcache
    Physical extents    4712 to 4807
Example LV created after the reboot:
  --- Logical volume ---
  LV Path                /dev/Kvmvol/kvm270_img
  LV Name                kvm270_img
  VG Name                Kvmvol
  LV UUID                qRSQK2-cnFG-dvPS-DSAp-FT8d-AcTZ-FdOHGn
  LV Write Access        read/write
  LV Creation host, time some.hostname, 2015-08-02 01:20:02 +0200
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             24
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:110

  --- Segments ---
  Logical extents 0 to 23:
    Type                linear
    Physical volume     /dev/mapper/ssdcache
    Physical extents    4952 to 4975
If I run a quick disk test writing 1 GB and monitor it with 'atop -d -a' on the LV I just created, I can see the ssdcache fire up. However, on an LV that was created before the reboot, the writes hit the back-end device directly, despite all attributes of the two logical volumes being identical.
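As a sketch of the kind of 1 GB write test meant here, assuming a scratch file on a filesystem mounted from the LV under test (the /mnt/kvm_new mount point and file name are hypothetical); oflag=direct bypasses the page cache so the writes show up against the block devices in 'atop -d -a':

dd if=/dev/zero of=/mnt/kvm_new/testfile bs=1M count=1024 oflag=direct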
Perhaps I am missing something?
I have the lvm.conf filter set:
filter = [ "r|/dev/sdb1|","r|/dev/sda4|" ]
However, I doubt this is the issue, as I would expect it to either work for all logical volumes or for none, given that it is the back-end device of the physical volume that has the cache mapped.
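For context, this is how such a filter would sit in the devices section of /etc/lvm/lvm.conf; since only reject rules are listed, every other device is still accepted, so /dev/mapper/ssdcache itself stays visible to LVM:

devices {
    filter = [ "r|/dev/sdb1|", "r|/dev/sda4|" ]
}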
Any suggestions, even silly ones, are welcome!
I don't understand why it happened, but I found the issue, so I decided to post my workaround for anyone who stumbles over this with the same problem:
If you run lsblk you will see that only the new LVs are listed under your cache device. Running 'lvchange --refresh /dev/yourVGname/yourLVname' puts an old LV back in the right place, and it starts hitting the cache again.
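A sketch of applying that workaround to every logical volume in the volume group at once, assuming the Kvmvol VG from above; 'lvs -o lv_path' prints the full device path of each LV, and the final lsblk should show all of them nested under the ssdcache device again:

# refresh every LV in the Kvmvol volume group
for lv in $(lvs --noheadings -o lv_path Kvmvol); do
    lvchange --refresh "$lv"
done
# verify the LVs are mapped under the cache device again
lsblk /dev/mapper/ssdcache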