Difficulty optimizing dependencies for AWS Lambda #297
Comments
My rule of thumb is to never compress JSON data. Having said that, we should be splitting this module into two: one for compressing responses and one for decompressing requests, as well as updating the outdated dependencies.

This is quite a bit of work. Would you like to volunteer to do it, or would you like to sponsor it?
Thanks for the fast response!

The reason I compress JSON is that there is actually a lot of duplication in the data (e.g. ProseMirror JSON-formatted documents, objects with similar structures). Right now I can send a 1 MB blob of data as roughly 100 kB using brotli. I tried a bunch of compression levels and landed on one that is still fast for most users; most requests return within 80 ms or so. Being able to control this per route (or rather, based on payload size) would be nice. I know we can already do this, so keeping this feature is important to me.

I only need to compress the data in certain endpoints, but I am also using tRPC inside of Fastify, so I can't just add this as a middleware to the Fastify routes that need it. Being able to import only the compression middleware (no decompression) would probably help a lot.

I don't have the time to work on this, and I also lack experience with Fastify itself. Would this be a one-time sponsorship or a recurring one? I'm unclear on how the rules for this work.
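For reference, per-route opt-in compression is possible with `@fastify/compress` when it is registered with `global: false`. The sketch below assumes that setup; the brotli quality value and the `loadDocuments()` helper are illustrative, not part of the plugin:

```js
// Sketch: per-route, compression-only setup with @fastify/compress.
// Assumes `global: false` so routes opt in explicitly; the quality value
// and the loadDocuments() helper are hypothetical.
import Fastify from 'fastify'
import compress from '@fastify/compress'
import { constants } from 'node:zlib'

const fastify = Fastify()

await fastify.register(compress, {
  global: false, // nothing is compressed unless a route asks for it
  brotliOptions: {
    params: { [constants.BROTLI_PARAM_QUALITY]: 4 } // favor speed over ratio
  }
})

// Only this endpoint compresses its highly repetitive JSON payload.
fastify.get('/documents', async (request, reply) => {
  const docs = await loadDocuments() // hypothetical data source
  return reply.type('application/json').compress(JSON.stringify(docs))
})
```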
I really don't have any input to provide here. I wouldn't use AWS Lambda for anything that would require any Fastify-branded module. Lambda is a tool for background batch processing, in my opinion.
Issue
Hi there,
I am currently trying to make my Lambda as lean as I can to reduce our cold-start times. I noticed that this package pulls in quite a bit, and also uses a couple of outdated dependencies.

All my findings are based on the metafile esbuild produces, which shows which code actually ends up in the bundle.
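For context, such a metafile can be produced and summarized roughly as follows (the entry point and output paths are examples, not the project's real layout):

```js
// build-analyze.mjs: bundle the Lambda handler and print what ends up in it.
// The entry point path is an example; adjust it to your project layout.
import * as esbuild from 'esbuild'

const result = await esbuild.build({
  entryPoints: ['src/lambda.js'],
  bundle: true,
  platform: 'node',
  outfile: 'dist/lambda.js',
  metafile: true // emit input/output size information
})

// Human-readable breakdown of which packages contribute how many bytes.
console.log(await esbuild.analyzeMetafile(result.metafile))
```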
- `mime-db` seems to be about 130 kB in size, just to check which MIME types are compressible. My API only ever deals with JSON data, and also sits in front of an API Gateway, so I was wondering if there is a way to make this optional (or at least lazy-loaded)?
- `string_decoder` and `readable-stream`. I tried to see if there is an easy way to upgrade them all, but even newer versions of `through2` only depend on `readable-stream@3`, whereas version 4 is already out. I am using the `pino` logger with Fastify, which pulls in the up-to-date versions of these packages.
- `duplexify`: I only ever need to compress responses, never decompress the input. Are there simpler/leaner/easier ways to do this? (See the sketch below.)
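For the compress-only case, a dependency-free approach using only Node core `zlib` might look like the sketch below. The size threshold, brotli quality, and JSON-only check are all assumptions for illustration, not part of this module:

```js
// Sketch: compress-only, size-based brotli for JSON responses using Node
// core zlib, with no duplexify/through2/mime-db. Assumes an existing
// `fastify` instance; threshold and quality values are illustrative.
import { brotliCompressSync, constants } from 'node:zlib'

const THRESHOLD = 1024 // hypothetical: skip payloads smaller than 1 kB

fastify.addHook('onSend', async (request, reply, payload) => {
  const type = String(reply.getHeader('content-type') ?? '')
  const acceptsBrotli = (request.headers['accept-encoding'] ?? '').includes('br')

  // Only handle buffered JSON payloads above the threshold.
  if (!acceptsBrotli || typeof payload !== 'string' ||
      !type.includes('application/json') || payload.length < THRESHOLD) {
    return payload
  }

  // Synchronous compression blocks the event loop; acceptable for small
  // payloads at a low quality setting in a Lambda-style environment.
  const compressed = brotliCompressSync(payload, {
    params: { [constants.BROTLI_PARAM_QUALITY]: 4 } // favor speed over ratio
  })
  reply.header('content-encoding', 'br')
  reply.removeHeader('content-length') // length changed; let Fastify recompute
  return compressed
})
```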
Thanks in advance!