
Checkpoints memory leak #1199


Open
wants to merge 7 commits into main

Conversation

RTHilton (Contributor)

Resolve bug #1190 with DisposeMany call for ModelsCache
Reduce overall memory usage on checkpoints manager by loading SafetensorMetadata only when opening the metadata viewer
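
The two changes can be pictured with a short DynamicData sketch. Everything below is illustrative: the cache and view-model names mirror the PR description, but the real pipeline in CheckpointsPageViewModel may be wired differently.

```csharp
using System.Collections.ObjectModel;
using DynamicData;

// Sketch of the fix, assuming a typical DynamicData SourceCache pipeline
// and that CheckpointFileViewModel implements IDisposable.
var cleanup = modelsCache
    .Connect()
    .Transform(model => new CheckpointFileViewModel(model))
    // DisposeMany() disposes each view model as it leaves the change set,
    // so removed or refreshed entries no longer accumulate in memory.
    .DisposeMany()
    .Bind(out ReadOnlyObservableCollection<CheckpointFileViewModel> models)
    .Subscribe();
```

Without DisposeMany(), view models removed from the cache are dropped from the bound collection but never disposed, which is the leak pattern reported in #1190.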

@gemini-code-assist bot left a comment


Hello @RTHilton, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

This pull request addresses bug #1190 by implementing a DisposeMany call for ModelsCache to resolve a memory leak. Additionally, it reduces overall memory usage in the checkpoints manager by loading SafetensorMetadata only when the metadata viewer is opened, rather than loading it eagerly.

Highlights

  • Memory Leak Fix: Implements DisposeMany for ModelsCache to resolve memory leak issue #1190 (Checkpoint Manager RAM leakage).
  • Performance Improvement: Loads SafetensorMetadata only when opening the metadata viewer, reducing overall memory usage.
  • Code Optimization: Removes unnecessary eager loading of Safetensor metadata, improving application responsiveness.

Changelog

  • StabilityMatrix.Avalonia/ViewModels/CheckpointManager/CheckpointFileViewModel.cs
    • Modified OpenSafetensorMetadataViewer to load SafetensorMetadata only when the viewer is opened.
    • Removed the initial parsing and caching of SafetensorMetadata to reduce memory footprint.
    • The safetensors metadata is now parsed every time the metadata viewer is opened, instead of being cached.
  • StabilityMatrix.Avalonia/ViewModels/CheckpointsPageViewModel.cs
    • Added DisposeMany() call to the observable collection of models to ensure proper disposal of resources and prevent memory leaks. Line 361
    • Refactored MoveBetweenFolders to improve readability and maintainability. Lines 846, 873, 877, 881, 886, 890
  • StabilityMatrix.Core/Models/Database/LocalModelFile.cs
    • Removed SafetensorMetadata and SafetensorMetadataParsed properties to prevent unnecessary data caching. Lines 157-161
  • StabilityMatrix.Core/Services/ModelIndexService.cs
    • Removed the LoadSafetensorMetadataAsync method and its call to prevent eager loading of safetensor metadata. Lines 548-627
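
The lazy-loading change in the changelog above can be illustrated roughly as follows. Note the ParseAsync method name and the dialog helper are assumptions for illustration, not the project's actual API:

```csharp
// Sketch only: parse the safetensors metadata at the moment the viewer is
// opened, rather than caching it on every LocalModelFile at index time.
public async Task OpenSafetensorMetadataViewer()
{
    // Hypothetical parse call; the real parser API may differ.
    var metadata = await SafetensorMetadata.ParseAsync(CheckpointFile.GetFullPath());

    // Hypothetical dialog helper; shows the parsed metadata to the user.
    await ShowMetadataDialogAsync(metadata);
}
```

The trade-off is explicit in the changelog: metadata is re-parsed on every viewer open instead of being held in memory, exchanging a small amount of repeated I/O for a lower steady-state memory footprint.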
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Supported commands:

  • /gemini review (Code Review): Performs a code review for the current pull request in its current state.
  • /gemini summary (Pull Request Summary): Provides a summary of the current pull request in its current state.
  • @gemini-code-assist (Comment): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • /gemini help (Help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


Trivia time!

What is the purpose of the ConfigureAwait(false) method in asynchronous C# code?

`ConfigureAwait(false)` tells the awaiter not to marshal the continuation back onto the captured synchronization context, allowing it to resume on any thread-pool thread. In library code this avoids deadlocks when a caller synchronously blocks on the returned task, and it can reduce context-switching overhead.
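
A minimal example of the pattern; the helper below is hypothetical illustration, not code from this PR:

```csharp
using System.IO;
using System.Threading.Tasks;

public static class ConfigExample
{
    // Library-style helper: ConfigureAwait(false) lets the code after the
    // await resume on a thread-pool thread instead of marshaling back to
    // the captured context (e.g. a UI thread's SynchronizationContext).
    public static async Task<string> ReadAllTextAsync(string path)
    {
        using var reader = new StreamReader(path);
        return await reader.ReadToEndAsync().ConfigureAwait(false);
    }
}
```

In application-level UI code the default (context-capturing) behavior is usually what you want, since code after the await often touches UI state; ConfigureAwait(false) is chiefly a library convention.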

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request addresses a memory leak in the Checkpoints manager by disposing of the ModelsCache and reduces memory usage by loading SafetensorMetadata only when the metadata viewer is opened. The changes look good overall, but I have a few suggestions for improvement.

Summary of Findings

  • Redundant Check: The original code had a redundant check for CheckpointFile.SafetensorMetadataParsed which has been removed in the updated code. This simplifies the logic and avoids unnecessary checks.
  • DisposeMany() usage: The DisposeMany() method is used to ensure that the CheckpointFileViewModel instances are properly disposed of when they are removed from the Models collection, preventing memory leaks.
  • Loading SafetensorMetadata: The SafetensorMetadata is now only loaded when the metadata viewer is opened, reducing overall memory usage. The code now directly parses the metadata instead of relying on cached values.

Merge Readiness

The pull request is well structured; it addresses the identified memory leak and reduces memory usage. The changes are clear and improve the overall efficiency of the Checkpoints manager. I recommend merging this pull request after addressing the identified medium-severity issue. Please have others review and approve this code before merging, as I am unable to do so.

@@ -358,6 +358,7 @@ or nameof(SortConnectedModelsFirst)
             x
         )
     )
+    .DisposeMany()

Severity: medium

This DisposeMany() call is great for ensuring that the CheckpointFileViewModel instances are properly disposed of when they are removed from the Models collection, preventing memory leaks. Good job!
