Stream upstream response to client on demand #27
Previously, the `HttpProxy` eagerly consumed the upstream response before streaming it to the client. This was necessary because it was never guaranteed that the client would consume the whole response and thus pull it through the connection slot. If the response was not consumed, the upstream would be backpressured and the connection slot was never released back to the pool; eventually the API gateway would run out of connection slots.

In Akka HTTP 10.1.x, the new connection pool implementation automatically clears any slot that doesn't have its entity consumed. See `response-entity-subscription-timeout` on https://doc.akka.io/docs/akka-http/current/configuration.html for a description of this mechanism.

Streaming the response continuously, instead of eagerly consuming it and then streaming it to the client, is preferable from both a memory consumption and a response latency point of view.
Publishing as an RC release for now, to go through testing.