Currently we support getting image extents only for an entire image. With large
images on very fragmented file systems, this can be very slow, delaying
backup, disk download, or disk upload (we use extents to determine the size).
With partial extents, we can get extents for part of the image, e.g. 1 GiB, and
start downloading data for the received extents while getting the extents
for the next segment. This is how other backends provide extents to
callers.
For example when you iterate over nbd backend extents:
for extents in backend.extents():
    ...
The backend does not get all the image extents at once. It gets extents for
the first 2 GiB and starts yielding them. When all extents are consumed, the
backend fetches extents for the next segment, until the end of the image
is reached.
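The segmented iteration described above can be sketched as a generator. This is an illustrative sketch, not the actual nbd backend code; the Extent tuple and the fetch_extents(start, length) callable are assumptions for the example.

```python
from collections import namedtuple

# Hypothetical extent record; the real backend uses its own extent type.
Extent = namedtuple("Extent", "start length zero")

SEGMENT = 2 * 1024**3  # 2 GiB per request, as the nbd backend does.

def extents(image_size, fetch_extents):
    # Yield extents segment by segment. fetch_extents(start, length) is an
    # assumed helper that returns the extents for one segment.
    start = 0
    while start < image_size:
        length = min(SEGMENT, image_size - start)
        # Yield this segment's extents before fetching the next segment,
        # so the caller can start copying data immediately.
        for extent in fetch_extents(start, length):
            yield extent
        start += length
```

The caller iterates over the generator exactly as shown above and never sees the segment boundaries.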
From the client's point of view, this partial extent retrieval is transparent,
and delays with large images and fragmented file systems are minimized.
We need to provide the same mechanism for the http backend. To implement this
we must have a way to specify ranges when getting extents.
Provide start= and length= query parameters to get partial extents.
Example requests:
# Get all extents from offset 0. length defaults to (size - start) if not
# specified.
GET /images/ticket-id/extents?context=zero&start=0
# Get extents from offset 0 to 1073741824
GET /images/ticket-id/extents?context=zero&start=0&length=1073741824
# Same, offset is 0 if not specified.
GET /images/ticket-id/extents?context=zero&length=1073741824
# Get extents from offset 52613349376 up to 53687091200
GET /images/ticket-id/extents?context=zero&start=52613349376&length=1073741824
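Building these request URLs on the client side could look like the sketch below. The helper name is hypothetical; only the path and query parameters come from the examples above, and omitted parameters fall back to the server defaults.

```python
def extents_url(base_url, ticket, start=None, length=None):
    # Build an extents request URL; omitted start/length use the
    # server-side defaults described above (start=0, length=size-start).
    params = ["context=zero"]
    if start is not None:
        params.append(f"start={start}")
    if length is not None:
        params.append(f"length={length}")
    return f"{base_url}/images/{ticket}/extents?" + "&".join(params)
```

For example, extents_url("https://host", "ticket-id", start=0, length=1073741824) reproduces the second request above.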
On the server side, pass the arguments to the backend without any
modification.
On the http backend, change extents() to get 1 GiB of extents per call
from the server, and fetch the next extents when needed.
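A sketch of that http backend change, mirroring the nbd backend behavior with a 1 GiB segment. The _request_extents(start, length) helper is an assumption standing in for one GET .../extents?context=zero&start=...&length=... request:

```python
SEGMENT = 1024**3  # 1 GiB per request, as proposed above.

def extents(image_size, _request_extents):
    # Fetch extents one segment at a time; the next request is sent
    # only after the caller has consumed the current segment.
    start = 0
    while start < image_size:
        length = min(SEGMENT, image_size - start)
        yield from _request_extents(start, length)
        start += length
```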
Currently we cache extents in the client. I'm not sure how caching
should be implemented; we can remove the caching since no other
backend caches extents. The main reason for caching was not having
a good way to get the image size, so we implemented size() using an
extents request.
Original bug: https://bugzilla.redhat.com/1924940