Move to a faster base64 implementation #19984
Conversation
Summary of Changes
Hello @h-avsha, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a performance enhancement: replacing the standard Python base64 operations with the optimized `pybase64` library. The migration is designed to accelerate the processing of large multimodal data, such as images and embeddings, thereby improving overall system efficiency and responsiveness.
Highlights
- **Performance Optimization**: Switched the default base64 implementation from Python's built-in `base64` module to the optimized `pybase64` library. This change is expected to significantly improve performance for large payloads, such as images and embeddings, by shaving off tens of milliseconds per request.
- **Dependency Management**: Added `pybase64` as a new dependency in `requirements/common.txt` to enable the use of the faster base64 implementation.
- **Code Migration**: Updated all relevant `base64.b64encode` and `base64.b64decode` calls within `vllm/multimodal/image.py` to use `pybase64` functions. Specifically, `pybase64.b64decode` now includes `validate=True` for robust decoding.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
Thanks for this performance improvement! Switching to `pybase64` is a great idea for handling large payloads.

The changes look good. I've left a couple of comments regarding the new `validate=True` flag in the `b64decode` calls. This makes the input validation stricter, which is a good thing, but it's a change in behavior that's worth confirming.

As you mentioned in the PR description, running the multimodal tests is important. It would also be beneficial to add a test case specifically for `load_base64` with malformed base64 data, to ensure the new validation behavior is covered and works as expected. This would help prevent regressions in the future.
```diff
@@ -55,7 +55,7 @@ def load_bytes(self, data: bytes) -> Image.Image:
         return convert_image_mode(image, self.image_mode)

     def load_base64(self, media_type: str, data: str) -> Image.Image:
-        return self.load_bytes(base64.b64decode(data))
+        return self.load_bytes(pybase64.b64decode(data, validate=True))
```
By switching to `pybase64.b64decode` and setting `validate=True`, you've made the base64 decoding stricter. The previous implementation, `base64.b64decode` with its default settings, would silently ignore non-base64 characters in the input string; the new implementation will raise an error.

This is a positive change for input validation and data integrity. I just want to confirm this change in behavior is intended. If so, this is a great improvement!
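This behavioral difference can be reproduced with the stdlib `base64` module alone, since `pybase64` mirrors its `validate` parameter:

```python
import base64
import binascii

# "!" is not in the base64 alphabet. With validate=False (the default),
# the stdlib silently discards it before decoding, so this succeeds.
assert base64.b64decode("aGV!sbG8=") == b"hello"

# With validate=True, the same input is rejected with binascii.Error.
try:
    base64.b64decode("aGV!sbG8=", validate=True)
except binascii.Error:
    pass  # strict decoding raises instead of guessing
```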
`validate=True` is actually faster, since with `validate=False`, pybase64 will proactively filter the input for illegitimate characters, which entails some performance overhead.
```diff
@@ -88,10 +88,10 @@ def load_bytes(self, data: bytes) -> torch.Tensor:
         return torch.load(buffer, weights_only=True)

     def load_base64(self, media_type: str, data: str) -> torch.Tensor:
-        return self.load_bytes(base64.b64decode(data))
+        return self.load_bytes(pybase64.b64decode(data, validate=True))
```
Similar to the change in `ImagePixelMediaIO.load_base64`, this change introduces stricter validation for base64-encoded image embeddings: the previous implementation would ignore invalid characters, while this one will raise an error.

This is a good enhancement for robustness. Confirming this is the intended behavior.
Hi @khluu, I would like to run vision tests (e.g. test_single_chat_session_image_base64encoded), but I don't have Buildkite permissions.
Unblocking multimodal tests
The idea looks good to me. Maybe we should add `pybase64` to the pip dependency list as well, if it is not there already?
Thanks @yewentao256, I thought that adding it to requirements/common.txt would be enough. Let me check.
@yewentao256 hey, the build failures seem unrelated to my changes. Can you please assist with this?
Signed-off-by: h-avsha <[email protected]>
Hi @h-avsha, you can simply merge main into your branch so that you can trigger the CI again.
Thanks @yewentao256, I think there might actually be a real issue; debugging.
LGTM! Thanks for the optimization!
This PR switches the default Python base64 to pybase64, an optimized implementation. For large payloads (images and embeddings), it shaves off tens of milliseconds per request.

(The plot compares only base64 performance, not the entire request. The X axis is the image resolution.)

I tested that the data is equivalent to base64 with torch.allclose(). But we should run multimodal tests, e.g. test_single_chat_session_image_base64encoded.