Our project performs on par with the original SAM and keeps exactly the same pipeline as the original SAM except for a change to the image encoder; it is therefore easy to integrate into any project.
MobileSAM is around 60 times smaller and around 50 times faster than the original SAM, and it is around 7 times smaller and around 5 times faster than the concurrent FastSAM. The comparison of the whole pipeline is summarized as follows:
Best Wishes,
Qiao
Sharing my observations on this: with MobileSAM, the inference improvement is not as large as advertised (only about a 2x improvement, since the gains come from the lightweight image encoder), because MobileSAM is not optimised for the "everything" mode that segment-anything-with-clip uses. Using FastSAM might give faster inference, but I have not tested this.
This is because everything mode runs the decoder step n_points_per_side * n_points_per_side times.
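A rough cost model makes this concrete. The sketch below is illustrative only: `n_points_per_side` is the real parameter of SAM's automatic mask generator, but the per-pass millisecond costs are assumed numbers, not measurements. It shows why shrinking only the encoder yields a modest overall speedup when the decoder must run once per grid point.

```python
# Sketch: why "everything" mode limits MobileSAM's end-to-end speedup.
# The encoder runs once per image, but the (cheap) mask decoder runs once
# per point prompt on an n x n grid, so decoder time dominates.

def decoder_prompt_count(n_points_per_side: int) -> int:
    # Everything mode samples an n x n grid of point prompts.
    return n_points_per_side * n_points_per_side

def total_time_ms(encoder_ms: float, decoder_ms_per_prompt: float,
                  n_points_per_side: int) -> float:
    # One encoder pass plus one decoder pass per point prompt.
    return encoder_ms + decoder_ms_per_prompt * decoder_prompt_count(n_points_per_side)

# Assumed, illustrative per-pass costs (not benchmarks):
sam_encoder_ms = 450.0      # heavy ViT-H encoder (assumed)
mobile_encoder_ms = 8.0     # lightweight MobileSAM encoder (assumed)
decoder_ms = 0.42           # mask decoder, per point prompt (assumed)
n = 32                      # default grid: 32 x 32 = 1024 prompts

sam_total = total_time_ms(sam_encoder_ms, decoder_ms, n)
mobile_total = total_time_ms(mobile_encoder_ms, decoder_ms, n)
print(f"prompts per image: {decoder_prompt_count(n)}")
print(f"end-to-end speedup: {sam_total / mobile_total:.1f}x")
```

With these assumed numbers, the shared decoder work (1024 prompts) swamps the encoder savings, so the end-to-end speedup lands near 2x rather than the ~50x single-prompt figure.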
Reference: https://github.com/ChaoningZhang/MobileSAM