# `ReadableFileHandleProtocol.readToEnd` can fail to read the complete contents of files smaller than the single shot read limit #2769
Resolved an issue where `ReadableFileHandleProtocol.readToEnd` could fail to read the contents of files smaller than the single shot read limit (64 MiB).

## Motivation

If `readToEnd` detects that the file in question is smaller than the single shot read limit, it reads the file using a single call to `readChunk`. However, there is no guarantee that `readChunk` will return the entire requested chunk. If this happens, `readToEnd` only returns the result of the first read and does not execute any follow-up reads.

## Modifications

I separated this into two sections (two commits) because I found another issue that I had to resolve in order to fix the chunking problem.
### First Commit

This is what is required to fix the missing chunk reads, but it causes `testReadFileAsChunks` to fail: `handle.readChunks(in: ..., chunkLength: .bytes(128))` moves the file access position to the end, which means that the subsequent `handle.readToEnd(maximumSizeAllowed: .bytes(1024 * 1024))` reads zero bytes since the file is fully read. We then get a precondition failure when we run `contents.moveReaderIndex(forwardBy: 100)`, because we're trying to move the reader index to 100 for a byte array of length zero.

The problem is that when we initialize a `FileChunks` object, if the range is set to `0..<Int.max`, we use the `.entireFile` chunk range. This causes `BufferedStream` to use a `ProducerState` with a `nil` range, which means that no seeking is done when reading chunks. It looks like this behavior is intended for the case where we want to read an unseekable file, but it's being inadvertently triggered when we request a chunked read of a whole file.

TL;DR: If we do any chunked read of a file and then try to do a chunked read of the entire file, the second read will begin where the first one left off instead of moving the pointer to the beginning of the file, despite the caller requesting a range starting at index zero.
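The first-commit fix can be modeled as a loop that keeps issuing reads until an empty chunk signals end of file, instead of trusting a single `readChunk` call to return everything. The following is a hypothetical, simplified sketch — the synchronous signature and helper names are illustrative, not the real NIOFileSystem API:

```swift
// Hypothetical model of the fix: loop until an empty read signals EOF,
// rather than assuming one read returns the whole requested chunk.
// `readChunk` stands in for a short-read-prone source; names are
// illustrative, not the actual NIOFileSystem API.
func readToEnd(
    readChunk: (Int, Int) -> [UInt8],  // (offset, maxLength) -> bytes, may be short
    chunkLength: Int
) -> [UInt8] {
    var contents: [UInt8] = []
    var offset = 0
    while true {
        let chunk = readChunk(offset, chunkLength)
        if chunk.isEmpty { break }  // zero bytes returned: end of file
        contents.append(contentsOf: chunk)
        offset += chunk.count  // advance by what was actually read, not what was requested
    }
    return contents
}
```

Even when a read returns fewer bytes than requested, the loop advances by the actual count and issues a follow-up read, so the final buffer contains the whole file.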
### Second Commit

- Changed `ChunkRange` to have two modes:
  - `current`: reads from whatever the underlying file handle's offset currently is.
  - `specified`: reads from the specified range.
- `ReadableFileHandleProtocol.readChunks`: this will trigger the use of `ChunkRange.current`.
- `ReadableFileHandleProtocol.readToEnd`: when reading an unseekable file.
- `testWriteAndReadUnseekableFile`: I think that this test was incorrect; there's no reason that we should not be able to read the contents of a fifo that we just wrote to.

## General Comment

Part of the reason I think this is happening is that the `readToEnd` function is a bit counterintuitive: it has a default parameter of 0 for `fromAbsoluteOffset`. When it's called with the default, it's not clear to the caller that it's going to go back to offset zero before reading (if the file is not a fifo). Maybe this should be changed to a `nil` default?

## Result

`readToEnd` should now return the full file contents when the file size is lower than the single shot read limit but `readChunk` does not return the entire requested chunk.
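As a rough illustration of the second commit's design, the two-mode `ChunkRange` could be sketched as follows. The mode names follow the description above, but the shapes and the helper are assumptions, not the real NIOFileSystem code:

```swift
// Hypothetical sketch of the two ChunkRange modes from the second commit;
// simplified, not the actual NIOFileSystem implementation.
enum ChunkRange: Equatable {
    /// Read from wherever the file handle's offset currently is
    /// (the only option for unseekable files such as fifos).
    case current
    /// Seek to and read the explicitly requested byte range.
    case specified(Range<Int64>)
}

// Illustrative helper: a seekable file honors the requested range, while an
// unseekable one must fall back to reading from the current offset.
func chunkRange(for requested: Range<Int64>, isSeekable: Bool) -> ChunkRange {
    isSeekable ? .specified(requested) : .current
}
```

Making the choice explicit means a whole-file request like `0..<Int.max` on a seekable file stays a `specified` range (so reading seeks back to offset zero), rather than silently collapsing into the no-seek behavior reserved for unseekable files.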