Memory consumption spikes when an API endpoint with file upload capability is hit #5778
Unanswered
athujoshi24 asked this question in Q&A
Replies: 1 comment
-
Hi @athujoshi24, to fix OOM issues with large file uploads, set app.config['MAX_CONTENT_LENGTH'] = 450 * 1024 * 1024 and override Request._get_file_stream to use a disk-based TemporaryFile. Stream uploads in chunks with shutil.copyfileobj(data.stream, f, length=16 * 1024 * 1024). Reduce Gunicorn workers to 1 (--workers 1 --preload) and increase the Kubernetes memory limit to 4 GB. Consider NGINX for request buffering, or S3 presigned URLs for scalability.
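For concreteness, here is a minimal sketch of the disk-spilling and chunked-streaming suggestions above, assuming connexion 2.x (where the underlying Flask app is exposed as app.app). Note that _get_file_stream is a private Werkzeug hook and may change between versions, and the handler name upload_artifact and target path are hypothetical:

```python
import shutil
import tempfile

import connexion
from flask import Request


class DiskBufferedRequest(Request):
    """Spill uploaded file parts to a disk-backed temporary file
    instead of buffering them in memory."""

    def _get_file_stream(self, total_content_length, content_type,
                         filename=None, content_length=None):
        # Werkzeug calls this to obtain the stream each multipart
        # file part is written into; a plain TemporaryFile keeps the
        # upload bytes off the process heap.
        return tempfile.TemporaryFile("rb+")


def upload_artifact(data):
    # Hypothetical connexion operation handler: `data` is the
    # werkzeug FileStorage for the uploaded file. Copy it onward in
    # 16 MiB chunks so at most one chunk is resident in memory.
    with open("/tmp/artifact.bin", "wb") as f:
        shutil.copyfileobj(data.stream, f, length=16 * 1024 * 1024)
    return {"status": "stored"}, 201


app = connexion.FlaskApp(__name__)
flask_app = app.app  # the underlying Flask instance in connexion 2.x
flask_app.request_class = DiskBufferedRequest
# Reject requests above 450 MB before form parsing even starts.
flask_app.config["MAX_CONTENT_LENGTH"] = 450 * 1024 * 1024
```

With this in place, a 320 MB upload should cost at most one 16 MiB chunk of heap at a time rather than the whole file, which is what the single-worker and 4 GB limit suggestions are sized around.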
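And if the internal storage platform is S3-compatible, the presigned-URL route mentioned above lets clients upload directly to storage so the bytes never pass through the Flask worker at all. A sketch with boto3, with the bucket and key names invented for illustration:

```python
import boto3

# Hypothetical bucket/key; the client PUTs the file straight to this
# URL, bypassing the Flask process entirely.
s3 = boto3.client("s3")
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "artifact-bucket", "Key": "artifact.bin"},
    ExpiresIn=3600,  # URL validity in seconds
)
```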
-
I have an API application, created using the connexion framework with the FlaskApp configuration. This app exposes an endpoint where users/consumers can upload artifacts (ranging from 2.5 MB to 400 MB), which are then uploaded to an internal storage platform. The pain point is that when a user selects a larger data file (~320 MB) to upload, the application often runs into an OOM issue and the user eventually receives a 502 status response.
Running the dmesg command inside the container shows the gunicorn worker being killed by oom_killer.
It would be of great help if someone here could guide me in avoiding such OOM issues in Flask applications.
- API Specification
- Python function
- Pipfile
- Application startup command
- Filtered logs
- System info
In a thread regarding the same issue on the Connexion forum, I received the following feedback: spec-first/connexion#2062 (comment)
What I'm looking for is recommendations on a better implementation to avoid spikes in memory consumption.