
Allow not take snapshots recursively #6

Open
pando85 opened this issue Feb 19, 2018 · 14 comments
pando85 commented Feb 19, 2018

It takes snapshots of all children, but I'm using Docker on my server, and taking those snapshots means also snapshotting all of the Docker driver datasets.

zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@631710867
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@pyznap_2018-02-19_18:43:57_monthly
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@pyznap_2018-02-19_18:44:00_weekly
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@pyznap_2018-02-19_18:43:54_yearly
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038@pyznap_2018-02-19_18:44:03_daily
zroot/e8155809157d4b464768dffea41e070a786dff531fba6647d7c788d98860d038

This doesn't make sense for me when I only want to back up my zroot volume.

It could be fixed with a new config option like recursively = no.
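For illustration only, such an option might sit alongside the existing per-dataset settings like this (recursively = no here is the suggestion above, not an existing pyznap option):

[zroot]
hourly = 24
daily = 7
snap = yes
clean = yes
recursively = no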

Thanks for your software!

yboetz (Owner) commented Feb 19, 2018

Hm, I specifically made pyznap with recursive snapshots in mind. I think I would have to do some restructuring to allow for non-recursive snapshots. I'll have to think about this.

driesmp commented Nov 11, 2018

Hi, I'm thinking that we need this more and more. A use case below:
Locally replicating zroot to storage/zroot.
Having other datasets under storage, e.g. storage/media, storage/downloads...

Then the config could look like this, snapshotting everything under storage except storage/zroot.
Although due to recursive snapshots this config is not possible, correct?

[zroot]
# retention policy
snap = yes
clean = yes
dest = storage/zroot

[storage]
# retention policy
snap = yes
clean = yes

[storage/zroot]
snap = no
clean = yes

yboetz (Owner) commented Nov 11, 2018

Yes, that will not be possible, as you would take additional snapshots in storage/zroot that would mess with zfs send/recv. In this case you would have to specify a retention policy for each dataset in storage, so:

[storage/media]
# retention policy
snap = yes
clean = yes

[storage/downloads]
# retention policy
snap = yes
clean = yes

[storage/zroot]
snap = no
clean = yes

A bit more configuration, but now it will work as intended.

The thing is, I specifically made pyznap with recursive snapshots in mind, as I wanted atomic snapshots across all children of a dataset. For that I have to use recursive snapshots. If I were to allow non-recursive snapshots, that would no longer be the case, and I would have to go through each child dataset and take a snapshot if the policy says so. So this is a design choice that I'd like to keep. I would have to think about whether this is possible in any other way, maybe taking snapshots and then immediately deleting the ones that shouldn't be recursive...

If you want to have snapshots in all child datasets except for some, you could just specify it in your config like this:

[storage]
# retention policy
snap = yes
clean = yes

[storage/downloads]
frequently = 0
hourly = 0
...
snap = no
clean = yes

Now storage and all its children get snapshotted according to the policy, including downloads, but whenever you clean snapshots, all of the snapshots of downloads will be deleted. If you take snapshots with the pyznap snap --full option (or just pyznap snap), then snapshots will be taken recursively and then immediately deleted where you don't want them.

But this does not work for zfs send destinations, as you want to keep snapshots there and not take new ones. So for that you would have to do it like I described above.

Edit: Note that for the second example to work, you need to explicitly set all values to 0 in the retention policy to overwrite the parent settings. Options not set will be overwritten by parent values.
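Spelled out fully, the override for the second example might look like this (assuming the retention keys used in this thread, frequently/hourly/daily/weekly/monthly/yearly, are the complete set):

[storage/downloads]
frequently = 0
hourly = 0
daily = 0
weekly = 0
monthly = 0
yearly = 0
snap = no
clean = yes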

driesmp commented Nov 11, 2018

I see. Do you think you could maybe add an extra config option, e.g. recursive = yes, which takes snapshots with -r? On the other hand, when recursive = no is set, it loops through the underlying datasets.

Just a thought as to how you could possibly tackle this.
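A rough sketch of what that non-recursive loop might look like (hypothetical code, not pyznap's actual implementation; it shells out to zfs to enumerate child datasets and snapshots each one individually):

import subprocess

def take_snapshot(dataset, snapname, recursive=True):
    # recursive=True: one atomic 'zfs snapshot -r' covers the dataset
    # and all of its children.
    if recursive:
        subprocess.run(['zfs', 'snapshot', '-r', f'{dataset}@{snapname}'], check=True)
        return
    # recursive=False: walk the dataset and its descendants and snapshot
    # them one by one; a per-dataset policy check could skip some here.
    listing = subprocess.run(['zfs', 'list', '-H', '-r', '-o', 'name', dataset],
                             capture_output=True, text=True, check=True)
    for name in listing.stdout.splitlines():
        subprocess.run(['zfs', 'snapshot', f'{name}@{snapname}'], check=True)

The trade-off is that the looped snapshots are no longer atomic across children, which is exactly the design concern raised above.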

yboetz (Owner) commented Nov 11, 2018

Yes, that might be a possibility. I'll have to see how hard that is to implement and how much time I have :).

driesmp commented Nov 11, 2018

Thanks for considering

redmop commented Nov 12, 2018

Note that the following command actually works atomically (but only within the same pool), so if the command can be built in the script (and it's not too long) it works.

zfs snapshot [-r] rpool/dataset1@snapshot rpool/dataset2@snapshot

yboetz (Owner) commented Nov 12, 2018

Hm, that is quite interesting, thanks.

At the moment there is a 'ZFSDataset' class that has a 'snapshot' function that takes a snapshot of that dataset only (optionally with -r specified). So I would have to rewrite that a bit so that multiple snapshots across the same pool can be taken in one command.
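As a rough sketch (hypothetical, not the current pyznap code), the per-dataset method could delegate to a pool-level helper that passes several snapshot names to one zfs snapshot invocation:

import subprocess

def snapshot_many(snapnames, recursive=False):
    # Take one atomic snapshot command for several datasets, e.g.
    # snapshot_many(['rpool/ds1@snap', 'rpool/ds2@snap']).
    # As noted above, all names must belong to the same pool.
    cmd = ['zfs', 'snapshot']
    if recursive:
        cmd.append('-r')
    subprocess.run(cmd + list(snapnames), check=True)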

marcinkuk commented

pyznap with recursive snapshots as an option could be the best backup solution.

I use Proxmox and other systems with ZFS.
I tried znapzend, sanoid/syncoid and other solutions.

Znapzend: I have several servers, but due to the ZFS parameters maintenance is horrible, and it has no snap/send separation.
Sanoid: very good, but has no atomic snapshots and no way to exclude sending unnecessary snapshots.
pyznap: cannot create non-recursive snapshots.

yboetz (Owner) commented Mar 30, 2019

I have not had time yet, it's still on my to-do list for pyznap... For now you can use the workaround described above. If you only want to take snapshots of child filesystems, you can also set up a policy similar to mine:

[rpool]
hourly = 24
daily = 7
weekly = 4
monthly = 6
snap = no
clean = yes

[rpool/ROOT/ubuntu]
snap = yes

[rpool/home]
snap = yes

[rpool/var/log]
snap = yes

[rpool/opt]
snap = yes

Here you specify the policy at the root (rpool) level, but set snap = no and then only activate the policy for child filesystems.

beren12 commented Jul 29, 2019

A few things… with ZFS channel programs you can take atomic snapshots of everything, and they don't have to be recursive.
It's also a bad idea to use the root dataset of a pool for files if possible. There are bugs, like space not being freed until unmount/export, and other edge cases. What I do is make a pool, set canmount=off, then make pool/files and have it mount over the top of pool.
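For reference, a channel program is a Lua script that the pool executes atomically via zfs program. A minimal sketch of driving one from Python (pool and snapshot names are placeholders; error handling is omitted):

import subprocess
import tempfile

# Lua channel program: snapshot every name passed as an argument, all in
# one transaction, so the snapshots are atomic without being recursive.
LUA = '''
args = ...
for i, snap in ipairs(args["argv"]) do
    assert(zfs.sync.snapshot(snap) == 0)
end
'''

with tempfile.NamedTemporaryFile('w', suffix='.zcp') as script:
    script.write(LUA)
    script.flush()
    subprocess.run(['zfs', 'program', 'rpool', script.name,
                    'rpool/dataset1@snap', 'rpool/dataset2@snap'], check=True)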

yboetz (Owner) commented Jul 30, 2019

As @redmop said, you can take atomic snapshots within the same pool by specifying

zfs snapshot [-r] rpool/dataset1@snapshot rpool/dataset2@snapshot

But this would need some restructuring of the Python code, as at the moment every dataset is a class instance and calls its own snapshot method with the zfs snapshot dataset@snapname command. Combining several of those into one command is not possible in the current code.

hb0nes commented Aug 24, 2022

We have datasets in a pool that we want to snapshot, but not a child dataset that is a zfs send destination.
e.g.:

[srv]
snap = yes

[srv/wallet]
snap = no

This is currently impossible.

portavales commented

Here is a pull request adding a config option to run non-recursive snapshots:
#108
I am currently running it on my homelab system.

I basically list each target dataset in the config with the non-recursive option, so that I can choose how each sub-tree is snapshotted.
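Illustratively, such a config might look like this (the actual option name and semantics are defined in the linked PR; this is only a sketch):

[tank/apps]
hourly = 24
snap = yes
clean = yes
recursive = no

[tank/media]
daily = 7
snap = yes
clean = yes
recursive = no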
