# SAM-HQ[[sam_hq]]

## Overview[[overview]]

SAM-HQ (High-Quality Segment Anything Model) was proposed in [Segment Anything in High Quality](https://huggingface.co/papers/2306.01567) by Lei Ke, Mingqiao Ye, Martin Danelljan, Yifan Liu, Yu-Wing Tai, Chi-Keung Tang, and Fisher Yu.

The model is an enhanced version of the original SAM (Segment Anything Model). SAM-HQ produces segmentation masks of much higher quality while fully retaining SAM's core strengths: its promptable design, efficiency, and zero-shot generalization ability.

![example image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-output.png)

SAM-HQ introduces the following five key improvements over the original SAM:

1. High-quality output token: SAM-HQ injects a learnable token into SAM's mask decoder. This token is the key component that enables the model to predict higher-quality segmentation masks.
2. Global-local feature fusion: Features extracted at different stages of the model are combined to improve the fine-grained accuracy of the segmentation masks. The global image context and fine object-boundary information are used together to improve mask quality.
3. Training data: Unlike SAM, which was trained on large-scale data such as SA-1B, SAM-HQ is trained on a carefully curated dataset of 44K high-quality masks.
4. Efficiency: Despite the substantial improvement in mask quality, only about 0.5% additional parameters are introduced (see the parameter-count sketch after this list).
5. Zero-shot performance: Despite the improved accuracy, SAM-HQ fully retains SAM's strong zero-shot generalization ability.
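
The ~0.5% figure can be checked directly by comparing parameter counts. Below is a minimal sketch, assuming the `facebook/sam-vit-base` and `syscv-community/sam-hq-vit-base` checkpoints are used and relying on the standard `num_parameters()` helper of `PreTrainedModel`:

```python
from transformers import SamModel, SamHQModel

# Illustrative comparison of parameter counts; the checkpoint pairing is an assumption.
sam = SamModel.from_pretrained("facebook/sam-vit-base")
sam_hq = SamHQModel.from_pretrained("syscv-community/sam-hq-vit-base")

n_sam = sam.num_parameters()
n_sam_hq = sam_hq.num_parameters()
extra = n_sam_hq - n_sam

print(f"SAM: {n_sam:,} parameters, SAM-HQ: {n_sam_hq:,} parameters")
print(f"Extra parameters: {extra:,} (~{100 * extra / n_sam:.2f}% of SAM)")
```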

The abstract from the paper is the following:

* The recently proposed Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. At the core, we design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train the introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on this dataset of 44K masks, which takes only 4 hours on 8 GPUs.

Tips for using SAM-HQ:

- SAM-HQ produces higher-quality masks than the original SAM, especially for objects with intricate structures and fine details.
- The model predicts binary masks with more accurate boundaries and better handling of thin structures.
- Like SAM, the model performs better when given 2D points and/or bounding boxes as input.
- You can prompt multiple points for the same image and predict a single high-quality mask (see the sketch after this list).
- The model retains SAM's zero-shot generalization ability.
- SAM-HQ adds only about 0.5% additional parameters compared to SAM.
- Fine-tuning the model is not supported yet.
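
For the multi-point tip above, a minimal sketch might look like the following; the two point coordinates are illustrative and `multimask_output=False` asks the model for a single mask informed by both points:

```python
import torch
import requests
from PIL import Image
from transformers import SamHQModel, SamHQProcessor

model = SamHQModel.from_pretrained("syscv-community/sam-hq-vit-base")
processor = SamHQProcessor.from_pretrained("syscv-community/sam-hq-vit-base")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# Two illustrative points that are both expected to fall on the same object.
input_points = [[[450, 600], [500, 650]]]

inputs = processor(raw_image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    # Request a single high-quality mask instead of three candidates.
    outputs = model(**inputs, multimask_output=False)
```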

This model was contributed by [sushmanth](https://huggingface.co/sushmanth).
The original code can be found [here](https://github.com/SysCV/SAM-HQ).

Below is an example of how to generate a mask given an image and a 2D point:

```python
import torch
from PIL import Image
import requests
from transformers import infer_device, SamHQModel, SamHQProcessor

device = infer_device()
model = SamHQModel.from_pretrained("syscv-community/sam-hq-vit-base").to(device)
processor = SamHQProcessor.from_pretrained("syscv-community/sam-hq-vit-base")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]  # 2D location of a window in the image

inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```
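
Continuing from the snippet above, one way to pick and save the highest-scoring mask is sketched below; the indexing assumes the shapes produced by a single image with a single point prompt (one entry in `masks`, IoU scores of shape `(batch, num_prompts, num_masks)`):

```python
import numpy as np
from PIL import Image

# `masks` holds one tensor per image of shape (num_prompts, num_masks, height, width).
image_masks = masks[0]
image_scores = scores[0, 0].cpu()  # IoU predictions for the single point prompt

best_idx = image_scores.argmax().item()
best_mask = image_masks[0, best_idx].numpy()

# Save the selected binary mask as a black-and-white image for inspection.
Image.fromarray((best_mask * 255).astype(np.uint8)).save("best_mask.png")
```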

You can also process your own segmentation maps together with the input images in the processor and pass them to the model:

```python
import torch
from PIL import Image
import requests
from transformers import infer_device, SamHQModel, SamHQProcessor

device = infer_device()
model = SamHQModel.from_pretrained("syscv-community/sam-hq-vit-base").to(device)
processor = SamHQProcessor.from_pretrained("syscv-community/sam-hq-vit-base")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
mask_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
segmentation_map = Image.open(requests.get(mask_url, stream=True).raw).convert("1")
input_points = [[[450, 600]]]  # 2D location of a window in the image

inputs = processor(raw_image, input_points=input_points, segmentation_maps=segmentation_map, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```
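
SAM-HQ-specific arguments can also be passed directly to the forward call. As a sketch continuing the example above, `hq_token_only=True` restricts mask generation to the high-quality token path and `multimask_output=False` requests a single mask per prompt (both arguments are documented in the `SamHQModel.forward` reference below):

```python
with torch.no_grad():
    # Use only the HQ token path and ask for a single mask per prompt.
    outputs = model(**inputs, multimask_output=False, hq_token_only=True)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
```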

## Resources[[resources]]

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SAM-HQ.

- Demo notebook for using the model (coming soon)
- Paper implementation and code: [SAM-HQ GitHub repository](https://github.com/SysCV/SAM-HQ)

## SamHQConfig[[transformers.SamHQConfig]]

#### transformers.SamHQConfig[[transformers.SamHQConfig]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/configuration_sam_hq.py#L153)

This is the configuration class to store the configuration of a SamHQModel. It is used to instantiate a SAM-HQ
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [syscv-community/sam-hq-vit-base](https://huggingface.co/syscv-community/sam-hq-vit-base) architecture.

Configuration objects inherit from [PreTrainedConfig](/docs/transformers/main/ko/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the
documentation from [PreTrainedConfig](/docs/transformers/main/ko/main_classes/configuration#transformers.PreTrainedConfig) for more information.

**Parameters:**

vision_config (`Union[dict, ~configuration_utils.PreTrainedConfig]`, *optional*) : The config object or dictionary of the vision backbone.

prompt_encoder_config (Union[`dict`, `SamHQPromptEncoderConfig`], *optional*) : Dictionary of configuration options used to initialize [SamHQPromptEncoderConfig](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQPromptEncoderConfig).

mask_decoder_config (Union[`dict`, `SamHQMaskDecoderConfig`], *optional*) : Dictionary of configuration options used to initialize [SamHQMaskDecoderConfig](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQMaskDecoderConfig).

initializer_range (`float`, *optional*, defaults to `0.02`) : The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

tie_word_embeddings (`bool`, *optional*, defaults to `True`) : Whether to tie weight embeddings according to model's `tied_weights_keys` mapping.
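
As a minimal sketch of how the pieces compose, a `SamHQConfig` can be built from explicit sub-configurations and used to instantiate a randomly initialized model; the default sub-config values here are simply the library defaults:

```python
from transformers import (
    SamHQConfig,
    SamHQVisionConfig,
    SamHQPromptEncoderConfig,
    SamHQMaskDecoderConfig,
    SamHQModel,
)

# Compose the top-level configuration from default sub-configurations.
config = SamHQConfig(
    vision_config=SamHQVisionConfig(),
    prompt_encoder_config=SamHQPromptEncoderConfig(),
    mask_decoder_config=SamHQMaskDecoderConfig(),
)

# Instantiate a model with random weights from the configuration.
model = SamHQModel(config)
```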

## SamHQVisionConfig[[transformers.SamHQVisionConfig]]

#### transformers.SamHQVisionConfig[[transformers.SamHQVisionConfig]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/configuration_sam_hq.py#L54)

This is the configuration class to store the configuration of a SamHQModel. It is used to instantiate a SAM-HQ
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [syscv-community/sam-hq-vit-base](https://huggingface.co/syscv-community/sam-hq-vit-base) architecture.

Configuration objects inherit from [PreTrainedConfig](/docs/transformers/main/ko/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the
documentation from [PreTrainedConfig](/docs/transformers/main/ko/main_classes/configuration#transformers.PreTrainedConfig) for more information.

Example:

```python
>>> from transformers import (
...     SamHQVisionConfig,
...     SamHQVisionModel,
... )

>>> # Initializing a SamHQVisionConfig with `"facebook/sam_hq-vit-huge"` style configuration
>>> configuration = SamHQVisionConfig()

>>> # Initializing a SamHQVisionModel (with random weights) from the `"facebook/sam_hq-vit-huge"` style configuration
>>> model = SamHQVisionModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```

**Parameters:**

hidden_size (`int`, *optional*, defaults to `768`) : Dimension of the hidden representations.

output_channels (`int`, *optional*, defaults to 256) : Dimensionality of the output channels in the Patch Encoder.

num_hidden_layers (`int`, *optional*, defaults to `12`) : Number of hidden layers in the Transformer decoder.

num_attention_heads (`int`, *optional*, defaults to `12`) : Number of attention heads for each attention layer in the Transformer decoder.

num_channels (`int`, *optional*, defaults to `3`) : The number of input channels.

image_size (`Union[int, list[int], tuple[int, int]]`, *optional*, defaults to `1024`) : The size (resolution) of each image.

patch_size (`Union[int, list[int], tuple[int, int]]`, *optional*, defaults to `16`) : The size (resolution) of each patch.

hidden_act (`str`, *optional*, defaults to `gelu`) : The non-linear activation function (function or string) in the decoder. For example, `"gelu"`, `"relu"`, `"silu"`, etc.

layer_norm_eps (`float`, *optional*, defaults to `1e-06`) : The epsilon used by the layer normalization layers.

attention_dropout (`Union[float, int]`, *optional*, defaults to `0.0`) : The dropout ratio for the attention probabilities.

initializer_range (`float`, *optional*, defaults to `1e-10`) : The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

qkv_bias (`bool`, *optional*, defaults to `True`) : Whether to add a bias to the queries, keys and values.

mlp_ratio (`float`, *optional*, defaults to `4.0`) : Ratio of the MLP hidden dim to the embedding dim.

use_abs_pos (`bool`, *optional*, defaults to `True`) : Whether to use absolute position embeddings.

use_rel_pos (`bool`, *optional*, defaults to `True`) : Whether to use relative position embedding.

window_size (`int`, *optional*, defaults to 14) : Window size for relative position.

global_attn_indexes (`list[int]`, *optional*, defaults to `[2, 5, 8, 11]`) : The indexes of the global attention layers.

num_pos_feats (`int`, *optional*, defaults to 128) : The dimensionality of the position embedding.

mlp_dim (`int`, *optional*) : The dimensionality of the MLP layer in the Transformer encoder. If `None`, defaults to `mlp_ratio * hidden_size`.

## SamHQMaskDecoderConfig[[transformers.SamHQMaskDecoderConfig]]

#### transformers.SamHQMaskDecoderConfig[[transformers.SamHQMaskDecoderConfig]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/configuration_sam_hq.py#L119)

This is the configuration class to store the configuration of a SamHQModel. It is used to instantiate a SAM-HQ
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [syscv-community/sam-hq-vit-base](https://huggingface.co/syscv-community/sam-hq-vit-base) architecture.

Configuration objects inherit from [PreTrainedConfig](/docs/transformers/main/ko/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the
documentation from [PreTrainedConfig](/docs/transformers/main/ko/main_classes/configuration#transformers.PreTrainedConfig) for more information.

**Parameters:**

hidden_size (`int`, *optional*, defaults to `256`) : Dimension of the hidden representations.

hidden_act (`str`, *optional*, defaults to `relu`) : The non-linear activation function (function or string) in the decoder. For example, `"gelu"`, `"relu"`, `"silu"`, etc.

mlp_dim (`int`, *optional*, defaults to 2048) : Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.

num_hidden_layers (`int`, *optional*, defaults to `2`) : Number of hidden layers in the Transformer decoder.

num_attention_heads (`int`, *optional*, defaults to `8`) : Number of attention heads for each attention layer in the Transformer decoder.

attention_downsample_rate (`int`, *optional*, defaults to 2) : The downsampling rate of the attention layer.

num_multimask_outputs (`int`, *optional*, defaults to 3) : The number of outputs from the `SamMaskDecoder` module. In the Segment Anything paper, this is set to 3.

iou_head_depth (`int`, *optional*, defaults to 3) : The number of layers in the IoU head module.

iou_head_hidden_dim (`int`, *optional*, defaults to 256) : The dimensionality of the hidden states in the IoU head module.

layer_norm_eps (`float`, *optional*, defaults to `1e-06`) : The epsilon used by the layer normalization layers.

vit_dim (`int`, *optional*, defaults to 768) : Dimensionality of the Vision Transformer (ViT) used in the `SamHQMaskDecoder` module.

## SamHQPromptEncoderConfig[[transformers.SamHQPromptEncoderConfig]]

#### transformers.SamHQPromptEncoderConfig[[transformers.SamHQPromptEncoderConfig]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/configuration_sam_hq.py#L29)

This is the configuration class to store the configuration of a SamHQModel. It is used to instantiate a SAM-HQ
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [syscv-community/sam-hq-vit-base](https://huggingface.co/syscv-community/sam-hq-vit-base) architecture.

Configuration objects inherit from [PreTrainedConfig](/docs/transformers/main/ko/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the
documentation from [PreTrainedConfig](/docs/transformers/main/ko/main_classes/configuration#transformers.PreTrainedConfig) for more information.

**Parameters:**

hidden_size (`int`, *optional*, defaults to `256`) : Dimension of the hidden representations.

image_size (`Union[int, list[int], tuple[int, int]]`, *optional*, defaults to `1024`) : The size (resolution) of each image.

patch_size (`Union[int, list[int], tuple[int, int]]`, *optional*, defaults to `16`) : The size (resolution) of each patch.

mask_input_channels (`int`, *optional*, defaults to 16) : The number of channels to be fed to the `MaskDecoder` module.

num_point_embeddings (`int`, *optional*, defaults to 4) : The number of point embeddings to be used.

hidden_act (`str`, *optional*, defaults to `gelu`) : The non-linear activation function (function or string) in the decoder. For example, `"gelu"`, `"relu"`, `"silu"`, etc.

layer_norm_eps (`float`, *optional*, defaults to `1e-06`) : The epsilon used by the layer normalization layers.

## SamHQProcessor[[transformers.SamHQProcessor]]

#### transformers.SamHQProcessor[[transformers.SamHQProcessor]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/processing_sam_hq.py#L83)

Constructs a SamHQProcessor which wraps an image processor into a single processor.

[SamHQProcessor](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQProcessor) offers all the functionalities of `SamImageProcessor`. See the documentation
of `SamImageProcessor` for more information.

**Parameters:**

image_processor (`SamImageProcessor`) : The image processor is a required input.
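
Since the model also accepts bounding-box prompts (see the tips above), a minimal sketch using `input_boxes` with the processor might look like this; the box coordinates are illustrative and follow the `(x1, y1, x2, y2)` convention documented for `SamHQModel.forward` below:

```python
import torch
import requests
from PIL import Image
from transformers import SamHQModel, SamHQProcessor

model = SamHQModel.from_pretrained("syscv-community/sam-hq-vit-base")
processor = SamHQProcessor.from_pretrained("syscv-community/sam-hq-vit-base")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# One illustrative box per image: (x1, y1, x2, y2) around the region of interest.
input_boxes = [[[75, 275, 1725, 850]]]

inputs = processor(raw_image, input_boxes=input_boxes, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
```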

## SamHQVisionModel[[transformers.SamHQVisionModel]]

#### transformers.SamHQVisionModel[[transformers.SamHQVisionModel]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/modeling_sam_hq.py#L1075)

The vision model from SamHQ without any head or projection on top.

This model inherits from [PreTrainedModel](/docs/transformers/main/ko/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.

**forward**

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/modeling_sam_hq.py#L1087)

`forward(pixel_values: torch.FloatTensor | None = None, **kwargs: Unpack[TransformersKwargs])`

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) --
  The tensors corresponding to the input images. Pixel values can be obtained using
  `SamImageProcessor`. See `SamImageProcessor.__call__()` for details ([SamHQProcessor](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQProcessor) uses
  `SamImageProcessor` for processing images).

The [SamHQVisionModel](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQVisionModel) forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

The returned `SamHQVisionEncoderOutput` has the following fields:

- **image_embeds** (`torch.FloatTensor` of shape `(batch_size, output_dim)`, *optional*, returned when model is initialized with `with_projection=True`) -- The image embeddings obtained by applying the projection layer to the pooler_output.
- **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*, defaults to `None`) -- Sequence of hidden-states at the output of the last layer of the model.
- **hidden_states** (`tuple[torch.FloatTensor, ...]`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
  one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

  Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- **attentions** (`tuple[torch.FloatTensor, ...]`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
  sequence_length)`.

  Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
  heads.
- **intermediate_embeddings** (`list(torch.FloatTensor)`, *optional*) -- A list of intermediate embeddings collected from certain blocks within the model, typically those without
  windowed attention. Each element in the list is of shape `(batch_size, sequence_length, hidden_size)`.
  This is specific to SAM-HQ and not present in base SAM.

**Parameters:**

config ([SamHQVisionConfig](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQVisionConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

**Returns:**

`SamHQVisionEncoderOutput` or `tuple(torch.FloatTensor)`

A `SamHQVisionEncoderOutput` or a tuple of
`torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various
elements depending on the configuration ([SamHQConfig](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQConfig)) and inputs.

## SamHQModel[[transformers.SamHQModel]]

#### transformers.SamHQModel[[transformers.SamHQModel]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/modeling_sam_hq.py#L1233)

Segment Anything Model HQ (SAM-HQ) for generating masks, given an input image and optional 2D location and bounding boxes.

This model inherits from [PreTrainedModel](/docs/transformers/main/ko/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the
library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)

This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
and behavior.

**forward**

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam_hq/modeling_sam_hq.py#L1317)

`forward(pixel_values=None, input_points=None, input_labels=None, input_boxes=None, input_masks=None, image_embeddings=None, multimask_output=True, hq_token_only=False, attention_similarity=None, target_embedding=None, intermediate_embeddings=None, **kwargs)`

- **pixel_values** (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) --
  The tensors corresponding to the input images. Pixel values can be obtained using
  `SamImageProcessor`. See `SamImageProcessor.__call__()` for details ([SamHQProcessor](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQProcessor) uses
  `SamImageProcessor` for processing images).
- **input_points** (`torch.FloatTensor` of shape `(batch_size, num_points, 2)`) --
  Input 2D spatial points, this is used by the prompt encoder to encode the prompt. Generally yields much
  better results. The points can be obtained by passing a list of list of list to the processor that will
  create corresponding `torch` tensors of dimension 4. The first dimension is the image batch size, the
  second dimension is the point batch size (i.e. how many segmentation masks do we want the model to predict
  per input point), the third dimension is the number of points per segmentation mask (it is possible to pass
  multiple points for a single mask), and the last dimension is the x (vertical) and y (horizontal)
  coordinates of the point. If a different number of points is passed either for each image, or for each
  mask, the processor will create "PAD" points that will correspond to the (0, 0) coordinate, and the
  computation of the embedding will be skipped for these points using the labels.
- **input_labels** (`torch.LongTensor` of shape `(batch_size, point_batch_size, num_points)`) --
  Input labels for the points, this is used by the prompt encoder to encode the prompt. According to the
  official implementation, there are 3 types of labels

  - `1`: the point is a point that contains the object of interest
  - `0`: the point is a point that does not contain the object of interest
  - `-1`: the point corresponds to the background

  We added the label:

  - `-10`: the point is a padding point, thus should be ignored by the prompt encoder

  The padding labels should be automatically done by the processor.
- **input_boxes** (`torch.FloatTensor` of shape `(batch_size, num_boxes, 4)`) --
  Input boxes for the points, this is used by the prompt encoder to encode the prompt. Generally yields
  much better generated masks. The boxes can be obtained by passing a list of list of list to the processor,
  that will generate a `torch` tensor, with each dimension corresponding respectively to the image batch
  size, the number of boxes per image and the coordinates of the top left and bottom right point of the box.
  In the order (`x1`, `y1`, `x2`, `y2`):

  - `x1`: the x coordinate of the top left point of the input box
  - `y1`: the y coordinate of the top left point of the input box
  - `x2`: the x coordinate of the bottom right point of the input box
  - `y2`: the y coordinate of the bottom right point of the input box
- **input_masks** (`torch.FloatTensor` of shape `(batch_size, image_size, image_size)`) --
  SAM_HQ model also accepts segmentation masks as input. The mask will be embedded by the prompt encoder to
  generate a corresponding embedding that will be fed later on to the mask decoder. These masks need to be
  manually fed by the user, and they need to be of shape (`batch_size`, `image_size`, `image_size`).
- **image_embeddings** (`torch.FloatTensor` of shape `(batch_size, output_channels, window_size, window_size)`) --
  Image embeddings, this is used by the mask decoder to generate masks and IoU scores. For more memory
  efficient computation, users can first retrieve the image embeddings using the `get_image_embeddings`
  method, and then feed them to the `forward` method instead of feeding the `pixel_values`.
- **multimask_output** (`bool`, *optional*) --
  In the original implementation and paper, the model always outputs 3 masks per image (or per point / per
  bounding box if relevant). However, it is possible to just output a single mask, that corresponds to the
  "best" mask, by specifying `multimask_output=False`.
- **hq_token_only** (`bool`, *optional*, defaults to `False`) --
  Whether to use only the HQ token path for mask generation. When False, combines both standard and HQ paths.
  This is specific to SAM-HQ's architecture.
- **attention_similarity** (`torch.FloatTensor`, *optional*) --
  Attention similarity tensor, to be provided to the mask decoder for target-guided attention in case the
  model is used for personalization as introduced in [PerSAM](https://huggingface.co/papers/2305.03048).
- **target_embedding** (`torch.FloatTensor`, *optional*) --
  Embedding of the target concept, to be provided to the mask decoder for target-semantic prompting in case
  the model is used for personalization as introduced in [PerSAM](https://huggingface.co/papers/2305.03048).
- **intermediate_embeddings** (`List[torch.FloatTensor]`, *optional*) --
  Intermediate embeddings from vision encoder's non-windowed blocks, used by SAM-HQ for enhanced mask quality.
  Required when providing pre-computed image_embeddings instead of pixel_values.
The [SamHQModel](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQModel) forward method, overrides the `__call__` special method.

Although the recipe for forward pass needs to be defined within this function, one should call the `Module`
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.

Example:

```python
>>> from PIL import Image
>>> import httpx
>>> from io import BytesIO
>>> from transformers import AutoModel, AutoProcessor

>>> model = AutoModel.from_pretrained("sushmanth/sam_hq_vit_b")
>>> processor = AutoProcessor.from_pretrained("sushmanth/sam_hq_vit_b")

>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-car.png"
>>> with httpx.stream("GET", url) as response:
...     image = Image.open(BytesIO(response.read())).convert("RGB")
>>> input_points = [[[400, 650]]]  # 2D location of a window on the car
>>> inputs = processor(images=image, input_points=input_points, return_tensors="pt")

>>> # Get high-quality segmentation mask
>>> outputs = model(**inputs)

>>> # For high-quality mask only
>>> outputs = model(**inputs, hq_token_only=True)

>>> # Postprocess masks
>>> masks = processor.post_process_masks(
...     outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
... )
```

**Parameters:**

config ([SamHQConfig](/docs/transformers/main/ko/model_doc/sam_hq#transformers.SamHQConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ko/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.

**Returns:**

`list[dict[str, torch.Tensor]]`
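
For memory-efficient inference, the forward docstring above describes passing precomputed `image_embeddings` (together with `intermediate_embeddings`) instead of `pixel_values`. The sketch below assumes `get_image_embeddings` returns both tensors, as the parameter descriptions above suggest; check the method's docstring for the exact return signature:

```python
import torch
import requests
from PIL import Image
from transformers import SamHQModel, SamHQProcessor

model = SamHQModel.from_pretrained("syscv-community/sam-hq-vit-base")
processor = SamHQProcessor.from_pretrained("syscv-community/sam-hq-vit-base")

img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
inputs = processor(raw_image, input_points=[[[450, 600]]], return_tensors="pt")

with torch.no_grad():
    # Assumption: get_image_embeddings returns the image embeddings and the
    # intermediate embeddings required by the SAM-HQ mask decoder.
    image_embeddings, intermediate_embeddings = model.get_image_embeddings(inputs["pixel_values"])

# Reuse the embeddings for as many prompts as needed without re-running the vision encoder.
inputs.pop("pixel_values")
with torch.no_grad():
    outputs = model(
        **inputs,
        image_embeddings=image_embeddings,
        intermediate_embeddings=intermediate_embeddings,
    )
```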

