The Segment Anything Model (SAM) is a computer vision model that produces high-quality object masks from input prompts like points or boxes, and can generate masks for all objects in an image. It was trained on a massive dataset of 11 million images and 1.1 billion masks, enabling strong zero-shot performance on various segmentation tasks. This repository provides code for running inference with SAM, links to download trained model checkpoints, and example notebooks demonstrating its usage.
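As a sketch of the prompt-based workflow described above, the snippet below loads a SAM checkpoint and predicts a mask from a single foreground point. The model type, checkpoint path, and image path are placeholders you would replace with your own; it assumes the `segment_anything` package is installed and a checkpoint has been downloaded.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Placeholder values: substitute your model type, checkpoint, and image.
sam = sam_model_registry["vit_h"](checkpoint="path/to/sam_vit_h_checkpoint.pth")
predictor = SamPredictor(sam)

# image should be an HxWx3 uint8 RGB array (e.g. loaded with OpenCV or PIL).
predictor.set_image(image)

# Prompt with one point; label 1 marks it as foreground.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
)
```

`masks` is a boolean array of candidate masks for the prompted object, with `scores` giving the model's quality estimate for each. For whole-image segmentation without prompts, the repository also provides `SamAutomaticMaskGenerator`.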