
Move to a faster base64 implementation #19984


Open · wants to merge 3 commits into main

Conversation


@h-avsha commented Jun 23, 2025

This PR switches from the default Python base64 module to pybase64, an optimized implementation. For large payloads (images and embeddings), it shaves off tens of milliseconds per request.
(The plot compares only base64 performance, not the entire request; the x-axis is the image resolution.)

[Plot: base64 performance vs. image resolution]

I verified that the decoded data is equivalent to the stdlib base64 output with torch.allclose(). But we should also run the multimodal tests, e.g. test_single_chat_session_image_base64encoded.
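For reference, a check along these lines reproduces the torch.allclose() comparison (a minimal sketch, not the exact script used here; the tensor shape and payload are arbitrary, and pybase64 must be installed):

    import base64
    import io

    import pybase64
    import torch

    # Serialize an arbitrary tensor to bytes.
    tensor = torch.randn(3, 224, 224)
    buffer = io.BytesIO()
    torch.save(tensor, buffer)
    raw = buffer.getvalue()

    # The two encoders must produce byte-identical output.
    assert base64.b64encode(raw) == pybase64.b64encode(raw)

    # Round-trip through pybase64 and compare the restored tensor.
    encoded = pybase64.b64encode(raw).decode("utf-8")
    decoded = pybase64.b64decode(encoded, validate=True)
    restored = torch.load(io.BytesIO(decoded), weights_only=True)
    assert torch.allclose(tensor, restored)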


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist bot left a comment


Summary of Changes

Hello @h-avsha, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a critical performance enhancement by replacing the standard Python base64 operations with the highly optimized pybase64 library. This migration is designed to accelerate the processing of large multimodal data, such as images and embeddings, thereby improving overall system efficiency and responsiveness.

Highlights

  • Performance Optimization: Switched the default base64 implementation from Python's built-in base64 module to the optimized pybase64 library. This change is expected to significantly improve performance for large payloads, such as images and embeddings, by shaving off tens of milliseconds per request.
  • Dependency Management: Added pybase64 as a new dependency in requirements/common.txt to enable the use of the faster base64 implementation.
  • Code Migration: Updated all relevant base64.b64encode and base64.b64decode calls within vllm/multimodal/image.py to utilize pybase64 functions. Specifically, pybase64.b64decode now includes validate=True for robust decoding.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@mergify bot added the ci/build and multi-modality (Related to multi-modality (#4194)) labels Jun 23, 2025
Contributor

@gemini-code-assist bot left a comment


Code Review

Thanks for this performance improvement! Switching to pybase64 is a great idea for handling large payloads.

The changes look good. I've left a couple of comments regarding the new validate=True flag in b64decode calls. This makes the input validation stricter, which is a good thing, but it's a change in behavior that's worth confirming.

As you mentioned in the PR description, running multimodal tests is important. It would be beneficial to add a new test case specifically for load_base64 with malformed base64 data to ensure the new validation behavior is covered and works as expected. This would help prevent regressions in the future.
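Something along these lines could cover the stricter decoding (a minimal pytest sketch at the pybase64 level; it assumes pybase64 raises binascii.Error on invalid input, as the stdlib base64 module does, and does not exercise the load_base64 wrappers directly):

    import binascii

    import pybase64
    import pytest


    def test_b64decode_rejects_malformed_input():
        # "!" and "-" are outside the base64 alphabet; with validate=True
        # they raise an error instead of being silently skipped.
        with pytest.raises(binascii.Error):
            pybase64.b64decode("aGVsbG8h!not-base64!", validate=True)


    def test_b64decode_accepts_valid_input():
        assert pybase64.b64decode("aGVsbG8=", validate=True) == b"hello"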

@@ -55,7 +55,7 @@ def load_bytes(self, data: bytes) -> Image.Image:
         return convert_image_mode(image, self.image_mode)
 
     def load_base64(self, media_type: str, data: str) -> Image.Image:
-        return self.load_bytes(base64.b64decode(data))
+        return self.load_bytes(pybase64.b64decode(data, validate=True))
Contributor


Severity: medium

By switching to pybase64.b64decode and setting validate=True, you've made the base64 decoding stricter. The previous implementation using base64.b64decode with its default settings would silently ignore non-base64 characters in the input string. The new implementation will raise an error.

This is a positive change for input validation and data integrity. I just want to confirm this change in behavior is intended. If so, this is a great improvement!

Author


validate=True is actually faster: with validate=False, pybase64 proactively filters illegitimate characters out of the input, which entails some performance overhead.
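A quick way to see the difference locally (a rough timeit sketch, not a rigorous benchmark; the payload size and repetition count are arbitrary):

    import os
    import timeit

    import pybase64

    # ~8 MB of base64 text, roughly the size of a large encoded image.
    payload = pybase64.b64encode(os.urandom(8 * 1024 * 1024))

    strict = timeit.timeit(lambda: pybase64.b64decode(payload, validate=True), number=50)
    lenient = timeit.timeit(lambda: pybase64.b64decode(payload, validate=False), number=50)
    print(f"validate=True:  {strict:.3f} s for 50 decodes")
    print(f"validate=False: {lenient:.3f} s for 50 decodes")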

@@ -88,10 +88,10 @@ def load_bytes(self, data: bytes) -> torch.Tensor:
         return torch.load(buffer, weights_only=True)
 
     def load_base64(self, media_type: str, data: str) -> torch.Tensor:
-        return self.load_bytes(base64.b64decode(data))
+        return self.load_bytes(pybase64.b64decode(data, validate=True))
Contributor


Severity: medium

Similar to the change in ImagePixelMediaIO.load_base64, this change introduces stricter validation for base64-encoded image embeddings. The previous implementation would ignore invalid characters, while this one will raise an error.

This is a good enhancement for robustness. Please confirm that this is the intended behavior.

@h-avsha
Author

h-avsha commented Jun 23, 2025

Hi @khluu, I would like to run the vision tests (e.g. test_single_chat_session_image_base64encoded), but I don't have Buildkite permissions.

@DarkLight1337
Member

Unblocking multimodal tests

Contributor

@yewentao256 left a comment


The idea looks good to me. Maybe we should also add "pybase64" to the pip dependency list, if it is not there already?

@h-avsha
Author

h-avsha commented Jun 23, 2025

Thanks @yewentao256, I thought that adding it to requirements/common.txt was enough. Let me check.

@h-avsha
Author

h-avsha commented Jun 24, 2025

@yewentao256 hey, the build failures seem unrelated to my changes - can you please assist with this?

@yewentao256
Contributor

@yewentao256 hey, the build failures seem unrelated to my changes - can you please assist with this?

Hi @h-avsha, you can simply merge main into your branch to trigger the CI again.

@h-avsha
Author

h-avsha commented Jun 24, 2025

Thanks @yewentao256, I think that there might actually be a real issue, debugging

@WoosukKwon added the ready (ONLY add when PR is ready to merge / full CI is needed) label Jun 24, 2025
Collaborator

@WoosukKwon left a comment


LGTM! Thanks for the optimization!

@WoosukKwon enabled auto-merge (squash) June 24, 2025 22:39