Describe the issue
In the documentation below, pickle is noted as being susceptible to a remote code execution vulnerability: https://github.com/protectai/modelscan/blob/main/docs/model_serialization_attacks.md#security-implication
What about the TensorFlow SavedModel format? Is this format secure? See the link below:
https://opensource.googleblog.com/2025/01/creating-safe-secure-ai-models.html
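To make the pickle concern concrete, here is a minimal sketch (my own illustration, not taken from the linked docs) of why loading a pickle file can execute attacker-chosen code: `__reduce__` lets a serialized object name any callable to be invoked during deserialization.

```python
import pickle

class Payload:
    """Not a model -- a minimal stand-in showing pickle's danger."""
    def __reduce__(self):
        # An attacker could return (os.system, ("<malicious command>",));
        # here a harmless eval() shows attacker-chosen code runs on load.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)   # executes eval("6 * 7") during deserialization
print(result)                 # -> 42; no Payload instance is ever created
```

This is why scanners like modelscan flag pickle-based formats: simply loading the file is enough to run code, with no method call on the resulting object required.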
Relevant page
Link the page that should be addressed
Expected behavior/text
Guidance on the usage of the TensorFlow SavedModel format. Do we need to worry about a remote code execution vulnerability?