Meta Releases SAM 3.1: ~7x Speedup for 128-Target Tracking, Real-Time Video Segmentation

According to monitoring by 1M AI News, Meta's Super Intelligence Lab has released Segment Anything Model 3.1 (SAM 3.1), a direct upgrade to SAM 3, the visual segmentation foundation model released last November. The core improvement is Object Multiplex, a shared-memory joint multi-target tracking method. SAM 3 processed each target independently, so inference cost grew linearly with the number of targets. SAM 3.1 instead processes up to 16 targets in a single forward pass and shares global contextual information across them, achieving roughly 7x inference acceleration when tracking 128 targets on a single H100 GPU, without sacrificing accuracy. SAM 3.1 also improves on 6 of 7 video object segmentation (VOS) benchmarks. The release is a plug-and-play upgrade, and the model weights are available for download on Hugging Face.
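To see why batching targets helps, a back-of-envelope cost model is enough: if every forward pass carries a fixed overhead (backbone features, memory reads) plus a small per-target increment, then sharing one pass among up to 16 targets cuts the number of passes for 128 targets from 128 to 8. The sketch below is purely illustrative; the function names and cost constants are assumptions for this toy model, not Meta's implementation or measured numbers.

```python
def passes_needed(n_targets: int, batch_size: int) -> int:
    """Forward passes required when up to `batch_size` targets share one pass."""
    return -(-n_targets // batch_size)  # ceiling division

def inference_cost(n_targets: int, batch_size: int,
                   fixed: float = 1.0, per_target: float = 0.09) -> float:
    """Toy cost: each pass pays a `fixed` overhead, plus a small
    `per_target` increment for every tracked object (assumed constants)."""
    return passes_needed(n_targets, batch_size) * fixed + n_targets * per_target

# SAM 3-style: one independent pass per target -> cost grows linearly.
sequential = inference_cost(128, batch_size=1)
# SAM 3.1-style (per the article): up to 16 targets share each pass.
batched = inference_cost(128, batch_size=16)

print(f"passes: {passes_needed(128, 1)} vs {passes_needed(128, 16)}, "
      f"speedup: {sequential / batched:.1f}x")
```

With these assumed constants the model lands in the same ballpark as the reported ~7x figure for 128 targets; the real speedup depends on how much of a pass is truly shared versus per-target work, which is why the gain is sublinear rather than a flat 16x.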
