# Utilities for Generation

This page lists all the utility functions used by [generate()](/docs/transformers/v5.3.0/zh/main_classes/text_generation#transformers.GenerationMixin.generate).

## Generate Outputs

The output of [generate()](/docs/transformers/v5.3.0/zh/main_classes/text_generation#transformers.GenerationMixin.generate) is an instance of a subclass of [ModelOutput](/docs/transformers/v5.3.0/zh/main_classes/output#transformers.utils.ModelOutput). This output is a data structure containing all the information returned by [generate()](/docs/transformers/v5.3.0/zh/main_classes/text_generation#transformers.GenerationMixin.generate), but it can also be used as a tuple or a dictionary.
Here's an example:

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2")

inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
```

The `generation_output` object is an instance of [GenerateDecoderOnlyOutput](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.generation.GenerateDecoderOnlyOutput); as we can see in the documentation of that class below, this means it has the following attributes:

- `sequences`: the generated sequences of tokens
- `scores` (optional): the prediction scores of the language modeling head, for each generation step
- `hidden_states` (optional): the hidden states of the model, for each generation step
- `attentions` (optional): the attention weights of the model, for each generation step

Here we have `scores` since we passed along `output_scores=True`, but we don't have `hidden_states` and `attentions` because we didn't pass `output_hidden_states=True` or `output_attentions=True`.

You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`. Here, for instance, `generation_output.scores` are all the generated prediction scores of the language modeling head, and `generation_output.attentions` is `None`.

When using our `generation_output` object as a tuple, it only keeps the attributes that don't have `None` values. Here, for instance, it has two elements, `sequences` then `scores`, so

```python
generation_output[:2]
```

will return the tuple `(generation_output.sequences, generation_output.scores)`.

When using our `generation_output` object as a dictionary, it only keeps the attributes that don't have `None` values. Here, for instance, it has two keys, which are `sequences` and `scores`.
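
For instance, continuing the example above, the three access patterns look like this (a minimal illustration; the exact shapes and keys depend on the generation settings and on which `output_*` flags were passed):

```python
# Attribute access: fields that were not requested are `None`
print(generation_output.sequences.shape)      # e.g. torch.Size([1, 20]) with the default `max_length`
print(generation_output.attentions)           # None, since `output_attentions=True` was not passed

# Tuple-like access: only the non-None fields are kept, in order
sequences, scores = generation_output[:2]

# Dict-like access: only the non-None fields appear as keys
print("scores" in generation_output)          # True
print(generation_output["scores"][0].shape)   # (batch_size, vocab_size) for the first generated token
```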

We document here all the output types.

### PyTorch[[transformers.generation.GenerateDecoderOnlyOutput]]

#### transformers.generation.GenerateDecoderOnlyOutput[[transformers.generation.GenerateDecoderOnlyOutput]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/utils.py#L147)

Outputs of decoder-only generation models, when using non-beam methods.

**Parameters:**

sequences (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`.

scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`) : Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.

logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`) : Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.

attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`.

hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, generated_length, hidden_size)`.

past_key_values (`Cache`, *optional*, returned when `use_cache=True`) : Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a `Cache` instance.

#### transformers.generation.GenerateEncoderDecoderOutput[[transformers.generation.GenerateEncoderDecoderOutput]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/utils.py#L183)

Outputs of encoder-decoder generation models, when using non-beam methods.

**Parameters:**

sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`) : The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`.

scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`) : Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.

logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`) : Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size, config.vocab_size)`.

encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True`) : Tuple of `torch.FloatTensor` (one for each layer of the encoder) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True`) : Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.

decoder_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`.

cross_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`.

decoder_hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, generated_length, hidden_size)`.

past_key_values (`Cache`, *optional*, returned when `use_cache=True`) : Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a `Cache` instance.

#### transformers.generation.GenerateBeamDecoderOnlyOutput[[transformers.generation.GenerateBeamDecoderOnlyOutput]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/utils.py#L231)

Outputs of decoder-only generation models, when using beam methods.

**Parameters:**

sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`) : The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`.

sequences_scores (`torch.FloatTensor` of shape `(batch_size*num_return_sequences)`, *optional*, returned when `output_scores=True`) : Final beam scores of the generated `sequences`.

scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`) : Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size*num_beams, config.vocab_size)`.

logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`) : Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size*num_beams, config.vocab_size)`.

beam_indices (`torch.LongTensor`, *optional*, returned when `output_scores=True`) : Beam indices of generated token id at each generation step. `torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`.

attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size*num_beams, num_heads, generated_length, sequence_length)`.

hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, generated_length, hidden_size)`.

past_key_values (`Cache`, *optional*, returned when `use_cache=True`) : Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a `Cache` instance.

#### transformers.generation.GenerateBeamEncoderDecoderOutput[[transformers.generation.GenerateBeamEncoderDecoderOutput]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/utils.py#L275)

Outputs of encoder-decoder generation models, when using beam methods.

**Parameters:**

sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`) : The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early due to the `eos_token_id`.

sequences_scores (`torch.FloatTensor` of shape `(batch_size*num_return_sequences)`, *optional*, returned when `output_scores=True`) : Final beam scores of the generated `sequences`.

scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True`) : Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size*num_beams, config.vocab_size)`.

logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True`) : Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with each tensor of shape `(batch_size*num_beams, config.vocab_size)`.

beam_indices (`torch.LongTensor`, *optional*, returned when `output_scores=True`) : Beam indices of generated token id at each generation step. `torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`.

encoder_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True`) : Tuple of `torch.FloatTensor` (one for each layer of the encoder) of shape `(batch_size, num_heads, sequence_length, sequence_length)`.

encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True`) : Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape `(batch_size*num_beams*num_return_sequences, sequence_length, hidden_size)`.

decoder_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, num_heads, generated_length, sequence_length)`.

cross_attentions (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_attentions=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size, num_heads, generated_length, sequence_length)`.

decoder_hidden_states (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `output_hidden_states=True`) : Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of `torch.FloatTensor` of shape `(batch_size*num_beams*num_return_sequences, generated_length, hidden_size)`.

past_key_values (`Cache`, *optional*, returned when `use_cache=True`) : Returns the model cache, used to speed up decoding. Different models have a different cache format, check the model's documentation. Usually, a `Cache` instance.

## LogitsProcessor

A [LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) can be used to modify the prediction scores of a language model head for generation.

### PyTorch[[transformers.AlternatingCodebooksLogitsProcessor]]

#### transformers.AlternatingCodebooksLogitsProcessor[[transformers.AlternatingCodebooksLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L2175)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) enforcing alternated generation between the two codebooks of Bark.

This logits processor is exclusively compatible with
[Bark](https://huggingface.co/docs/transformers/en/model_doc/bark)'s fine submodel. See the model documentation
for examples.

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L2204)

**Parameters:**

input_start_len (`int`) : The length of the initial input sequence.

semantic_vocab_size (`int`) : Vocabulary size of the semantic part, i.e number of tokens associated to the semantic vocabulary.

codebook_size (`int`) : Number of tokens associated to the codebook.

#### transformers.ClassifierFreeGuidanceLogitsProcessor[[transformers.ClassifierFreeGuidanceLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L2111)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) for classifier free guidance (CFG). The scores are split over the batch dimension,
where the first half correspond to the conditional logits (predicted from the input prompt) and the second half
correspond to the unconditional logits (predicted from an empty or 'null' prompt). The processor computes a
weighted average across the conditional and unconditional logits, parameterised by the `guidance_scale`.

See [the paper](https://huggingface.co/papers/2306.05284) for more information.

This logits processor is exclusively compatible with
[MusicGen](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen)

Examples:

```python
>>> from transformers import AutoProcessor, MusicgenForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
>>> model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

>>> inputs = processor(
...     text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
...     padding=True,
...     return_tensors="pt",
... )
>>> audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L2159)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

guidance_scale (float) : The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale > 1`. Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.
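
The combination of the two halves of the batch can be sketched as follows (a simplified illustration of the weighted average described above, not the exact library code):

```python
import torch

def cfg_scores(scores: torch.FloatTensor, guidance_scale: float) -> torch.FloatTensor:
    # The batch is assumed to hold the conditional logits in its first half and
    # the unconditional ("null" prompt) logits in its second half.
    cond, uncond = scores.chunk(2, dim=0)
    # guidance_scale == 1 reduces to the conditional logits only.
    return uncond + guidance_scale * (cond - uncond)
```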

#### transformers.EncoderNoRepeatNGramLogitsProcessor[[transformers.EncoderNoRepeatNGramLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1137)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that works similarly to [NoRepeatNGramLogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.NoRepeatNGramLogitsProcessor), but applied exclusively to prevent
the repetition of n-grams present in the prompt.

It was designed to promote chattiness in a language model, by preventing the generation of n-grams present in
previous conversation rounds.

Examples:

```py
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")

>>> inputs = tokenizer("Alice: I love cats. What do you love?\nBob:", return_tensors="pt")

>>> # With greedy decoding, we see Bob repeating Alice's opinion. If Bob was a chatbot, it would be a poor one.
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
Alice: I love cats. What do you love?
Bob: I love cats. What do you

>>> # With this logits processor, we can prevent Bob from repeating Alice's opinion.
>>> outputs = model.generate(**inputs, encoder_no_repeat_ngram_size=2)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
Alice: I love cats. What do you love?
Bob: My cats are very cute.
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1186)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

encoder_ngram_size (`int`) : All ngrams of size `ngram_size` can only occur within the encoder input ids.

encoder_input_ids (`int`) : The encoder_input_ids that should not be repeated within the decoder ids.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.EncoderRepetitionPenaltyLogitsProcessor[[transformers.EncoderRepetitionPenaltyLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L413)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that works similarly to [RepetitionPenaltyLogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.RepetitionPenaltyLogitsProcessor), but with an *inverse* penalty
that is applied to the tokens present in the prompt. In other words, a penalty above 1.0 increases the odds of
selecting tokens that were present in the prompt.

It was designed to avoid hallucination in input-grounded tasks, like summarization. Although originally intended
for encoder-decoder models, it can also be used with decoder-only models like LLMs.

Examples:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
>>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

>>> inputs = tokenizer(["Alice and Bob. The third member's name was"], return_tensors="pt")
>>> gen_out = model.generate(**inputs)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
Alice and Bob. The third member's name was not mentioned.

>>> # With the `encoder_repetition_penalty` argument we can trigger this logits processor in `generate`, which can
>>> # promote the use of prompt tokens ("Bob" in this example)
>>> gen_out = model.generate(**inputs, encoder_repetition_penalty=1.2)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
Alice and Bob. The third member's name was Bob. The third member's name was Bob.
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L457)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

penalty (`float`) : The parameter for repetition penalty. 1.0 means no penalty. Above 1.0 rewards prompt tokens. Between 0.0 and 1.0 penalizes prompt tokens.

encoder_input_ids (`torch.LongTensor`) : The encoder_input_ids that should be repeated within the decoder ids.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.EpsilonLogitsWarper[[transformers.EpsilonLogitsWarper]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L858)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that performs epsilon-sampling, i.e. restricting to tokens with `prob >= epsilon`. Takes the
largest min_tokens_to_keep tokens if no tokens satisfy this constraint. See [Truncation Sampling as Language Model
Desmoothing](https://huggingface.co/papers/2210.15191) for more information.

Examples:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(1)
>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")

>>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt")

>>> # With sampling, the output is unexpected -- sometimes too unexpected.
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3 | 

>>> # With epsilon sampling, the output gets restricted to high-probability tokens. Note that this is similar to
>>> # Top P sampling, which restricts tokens based on their cumulative probability.
>>> # Pro tip: The paper recommends using `epsilon_cutoff` values between 3e-4 and 9e-4
>>> outputs = model.generate(**inputs, do_sample=True, epsilon_cutoff=0.1)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L913)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

epsilon (`float`) : If set to > 0, only the tokens with probabilities `epsilon` or higher are kept for generation.

filter_value (`float`, *optional*, defaults to -inf) : All filtered values will be set to this float value.

min_tokens_to_keep (`int`, *optional*, defaults to 1) : Minimum number of tokens that cannot be filtered.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.
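
The truncation rule itself can be sketched as follows (a rough illustration that ignores the `min_tokens_to_keep` fallback, which keeps the highest-probability tokens when everything falls below the cutoff):

```python
import torch

def epsilon_filter(scores: torch.FloatTensor, epsilon: float, filter_value: float = float("-inf")) -> torch.FloatTensor:
    # Mask out every token whose softmax probability falls below `epsilon`.
    probs = scores.softmax(dim=-1)
    return scores.masked_fill(probs < epsilon, filter_value)
```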

#### transformers.EtaLogitsWarper[[transformers.EtaLogitsWarper]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L927)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that performs eta-sampling, a technique to filter out tokens with probabilities below a dynamic
cutoff value, `eta`, which is calculated based on a combination of the hyperparameter `epsilon` and the entropy of
the token probabilities, i.e. `eta := min(epsilon, sqrt(epsilon * e^-entropy(probabilities)))`. Takes the largest
min_tokens_to_keep tokens if no tokens satisfy this constraint. It addresses the issue of poor quality in long
samples of text generated by neural language models leading to more coherent and fluent text. See [Truncation
Sampling as Language Model Desmoothing](https://huggingface.co/papers/2210.15191) for more information. Note: `do_sample`
must be set to `True` for this `LogitsProcessor` to work.

Examples:
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(1)
>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")

>>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt")

>>> # With sampling, the output is unexpected -- sometimes too unexpected.
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3 | 

>>> # With eta sampling, the output gets restricted to high-probability tokens. You can see it as a dynamic form of
>>> # epsilon sampling that adapts its cutoff probability based on the entropy (high entropy = lower cutoff).
>>> # Pro tip: The paper recommends using `eta_cutoff` values between 3e-4 to 4e-3
>>> outputs = model.generate(**inputs, do_sample=True, eta_cutoff=0.1)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L996)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

epsilon (`float`) : A float value in the range (0, 1). Hyperparameter used to calculate the dynamic cutoff value, `eta`. The suggested values from the paper range from 3e-4 to 4e-3, depending on the size of the model.

filter_value (`float`, *optional*, defaults to -inf) : All values that are found to be below the dynamic cutoff value, `eta`, are set to this float value. This parameter is useful when logits need to be modified for very low probability tokens that should be excluded from generation entirely.

min_tokens_to_keep (`int`, *optional*, defaults to 1) : Specifies the minimum number of tokens that must be kept for generation, regardless of their probabilities. For example, if `min_tokens_to_keep` is set to 1, at least one token will always be kept for generation, even if all tokens have probabilities below the cutoff `eta`.

device (`str`, *optional*, defaults to `"cpu"`) : The device to allocate the tensors.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.
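
A rough sketch of the dynamic cutoff, following the `eta` expression quoted above and again ignoring the `min_tokens_to_keep` fallback:

```python
import torch

def eta_filter(scores: torch.FloatTensor, epsilon: float, filter_value: float = float("-inf")) -> torch.FloatTensor:
    probs = scores.softmax(dim=-1)
    # Shannon entropy of each row's next-token distribution
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1, keepdim=True)
    eps = scores.new_tensor(epsilon)
    # eta := min(epsilon, sqrt(epsilon * e^-entropy)), computed per batch row
    eta = torch.minimum(eps, torch.sqrt(eps * torch.exp(-entropy)))
    return scores.masked_fill(probs < eta, filter_value)
```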

#### transformers.ExponentialDecayLengthPenalty[[transformers.ExponentialDecayLengthPenalty]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1672)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that exponentially increases the score of the `eos_token_id` after `start_index` has been
reached. This allows generating shorter sequences without having a hard cutoff, allowing the `eos_token` to be
predicted in a meaningful position.

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

>>> text = "Just wanted to let you know, I"
>>> inputs = tokenizer(text, return_tensors="pt")

>>> # Let's consider that we want short sentences, so we limit `max_length=30`. However, we observe that the answer
>>> # tends to end abruptly.
>>> set_seed(1)
>>> outputs = model.generate(**inputs, do_sample=True, temperature=0.9, max_length=30, pad_token_id=50256)
>>> print(tokenizer.batch_decode(outputs)[0])
Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network which was
published in 2010. Although

>>> # To promote the appearance of the EOS token at the right time, we add the `exponential_decay_length_penalty =
>>> # (start_index, decay_factor)`. Instead of cutting at max_tokens, the output comes to an end before and usually
>>> # with more meaning. What happens is that starting from `start_index` the EOS token score will be increased
>>> # by `decay_factor` exponentially. However, if you set a high decay factor, you may also end up with abruptly
>>> # ending sequences.
>>> set_seed(1)
>>> outputs = model.generate(
...     **inputs,
...     do_sample=True,
...     temperature=0.9,
...     max_length=30,
...     pad_token_id=50256,
...     exponential_decay_length_penalty=(15, 1.6),
... )
>>> print(tokenizer.batch_decode(outputs)[0])
Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network
which

>>> # With a small decay factor, you will have a higher chance of getting a meaningful sequence.
>>> set_seed(1)
>>> outputs = model.generate(
...     **inputs,
...     do_sample=True,
...     temperature=0.9,
...     max_length=30,
...     pad_token_id=50256,
...     exponential_decay_length_penalty=(15, 1.01),
... )
>>> print(tokenizer.batch_decode(outputs)[0])
Just wanted to let you know, I received a link to an ebook, the book How To Start A Social Network which was
published in 2010.
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1758)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

exponential_decay_length_penalty (`tuple(int, float)`) : This tuple shall consist of: `(start_index, decay_factor)` where `start_index` indicates where penalty starts and `decay_factor` represents the factor of exponential decay

eos_token_id (`Union[int, list[int], torch.Tensor]`) : The id(s) of the *end-of-sequence* token.

input_ids_seq_length (`int`) : The length of the input sequence.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.ForcedBOSTokenLogitsProcessor[[transformers.ForcedBOSTokenLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1550)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that enforces the specified token as the first generated token. Used with encoder-decoder
models.

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

>>> inputs = tokenizer("Translate from English to German: I love cats.", return_tensors="pt")

>>> # By default, it continues generating according to the model's logits
>>> outputs = model.generate(**inputs, max_new_tokens=10)
>>> print(tokenizer.batch_decode(outputs)[0])
 Ich liebe Kitty.

>>> # We can use `forced_bos_token_id` to force the start of generation with an encoder-decoder model
>>> # (including forcing it to end straight away with an EOS token)
>>> outputs = model.generate(**inputs, max_new_tokens=10, forced_bos_token_id=tokenizer.eos_token_id)
>>> print(tokenizer.batch_decode(outputs)[0])

```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1585)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

bos_token_id (`int`) : The id of the token to force as the first generated token.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.ForcedEOSTokenLogitsProcessor[[transformers.ForcedEOSTokenLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1595)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that enforces the specified token as the last generated token when `max_length` is reached.

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")

>>> inputs = tokenizer("A sequence: 1, 2, 3", return_tensors="pt")

>>> # By default, it continues generating according to the model's logits
>>> outputs = model.generate(**inputs, max_new_tokens=10)
>>> print(tokenizer.batch_decode(outputs)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7, 8

>>> # `forced_eos_token_id` ensures the generation ends with a EOS token
>>> outputs = model.generate(**inputs, max_new_tokens=10, forced_eos_token_id=tokenizer.eos_token_id)
>>> print(tokenizer.batch_decode(outputs)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7,
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1641)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

max_length (`int`) : The maximum length of the sequence to be generated.

eos_token_id (`Union[int, list[int], torch.Tensor]`) : The id(s) of the *end-of-sequence* token.

device (`str`, *optional*, defaults to `"cpu"`) : The device to allocate the tensors.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.InfNanRemoveLogitsProcessor[[transformers.InfNanRemoveLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1651)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that removes all `nan` and `inf` values in order to prevent the generation method from failing. Note that this
logits processor should only be used if necessary, since it can slow down the generation method.

This logits processor has no `generate` example, as there shouldn't be a correct combination of flags that warrants
its use.

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1660)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)

scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) : Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.
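
It can still be invoked directly when sanitizing a tensor of scores outside of `generate`; a minimal sketch:

```python
import torch
from transformers import InfNanRemoveLogitsProcessor

processor = InfNanRemoveLogitsProcessor()
input_ids = torch.tensor([[0]])  # dummy ids; only the scores are inspected
scores = torch.tensor([[0.5, float("nan"), float("inf"), -1.0]])
# `nan` entries are zeroed out and `+inf` is clamped to the largest finite value of the dtype
print(processor(input_ids, scores))
```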

#### transformers.LogitNormalization[[transformers.LogitNormalization]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1773)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) for normalizing the scores using log-softmax. It's important to normalize the scores during beam search,
after applying the logits processors or warpers, since the search algorithm used in this library doesn't do it (it
only normalizes beforehand, and the processors may change the scores so that they need re-normalization), yet it
still assumes that the scores are normalized when comparing the hypotheses.

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> import torch

>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")

>>> inputs = tokenizer("A sequence: 1, 2, 3", return_tensors="pt")

>>> # By default, the scores are not normalized -- the sum of their exponentials is NOT a normalized probability
>>> # distribution, summing to 1
>>> outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
>>> print(torch.allclose(torch.sum(torch.exp(outputs.scores[-1])), torch.Tensor((1.000,)), rtol=1e-4))
False

>>> # Normalizing them may have a positive impact on beam methods, or when using the scores on your application
>>> outputs = model.generate(**inputs, renormalize_logits=True, return_dict_in_generate=True, output_scores=True)
>>> print(torch.allclose(torch.sum(torch.exp(outputs.scores[-1])), torch.Tensor((1.000,)), rtol=1e-4))
True
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1804)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)

scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) : Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.LogitsProcessor[[transformers.LogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L48)

Abstract base class for all logit processors that can be applied during generation.

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L51)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)

scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) : Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.
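
Concretely, a new processor is created by subclassing [LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) and implementing `__call__` with the signature above. The sketch below (with hypothetical names) adds a constant bonus to the score of a chosen token id:

```python
import torch
from transformers import LogitsProcessor

class BoostTokenLogitsProcessor(LogitsProcessor):
    """Hypothetical processor that adds a constant bonus to one token's score."""

    def __init__(self, token_id: int, bonus: float = 5.0):
        self.token_id = token_id
        self.bonus = bonus

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores = scores.clone()  # avoid modifying the caller's tensor in place
        scores[:, self.token_id] = scores[:, self.token_id] + self.bonus
        return scores
```

Such a processor can then be passed to `generate()` inside a [LogitsProcessorList](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessorList), exactly like the built-in ones.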

#### transformers.LogitsProcessorList[[transformers.LogitsProcessorList]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L58)

This class can be used to create a list of [LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) to subsequently process a `scores` input tensor.
This class inherits from list and adds a specific *__call__* method to apply each [LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) to the
inputs.

`__call__(input_ids: LongTensor, scores: FloatTensor, **kwargs)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L65)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.
- **kwargs** (`dict[str, Any]`, *optional*) --
  Additional kwargs that are specific to a logits processor.

**Parameters:**

input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)

scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) : Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search

kwargs (`dict[str, Any]`, *optional*) : Additional kwargs that are specific to a logits processor.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.
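
As a minimal sketch (the checkpoint choice is only for illustration), a list can be built from the processors documented on this page and either applied to one step of scores directly or passed to `generate()` through its `logits_processor` argument:

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessorList,
    MinLengthLogitsProcessor,
    NoRepeatNGramLogitsProcessor,
)

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
inputs = tokenizer("Hello, my dog is cute and", return_tensors="pt")

# Roughly equivalent to passing `min_length=20` and `no_repeat_ngram_size=2` to `generate`.
processors = LogitsProcessorList(
    [
        MinLengthLogitsProcessor(20, eos_token_id=tokenizer.eos_token_id),
        NoRepeatNGramLogitsProcessor(2),
    ]
)

# Applied manually to one step of next-token scores...
with torch.no_grad():
    next_token_scores = model(**inputs).logits[:, -1, :]
processed = processors(inputs["input_ids"], next_token_scores)

# ...or handed to `generate`, which calls the list at every step.
outputs = model.generate(**inputs, logits_processor=processors, max_new_tokens=30)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```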

#### transformers.MinLengthLogitsProcessor[[transformers.MinLengthLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L102)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) enforcing a min-length by setting EOS probability to 0. Note that, for decoder-only models
like most LLMs, the length includes the prompt.

Examples:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
>>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

>>> inputs = tokenizer("A number:", return_tensors="pt")
>>> gen_out = model.generate(**inputs)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
A number: one

>>> # setting `min_length` to a value smaller than the uncontrolled output length has no impact
>>> gen_out = model.generate(**inputs, min_length=3)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
A number: one

>>> # setting a larger `min_length` will force the model to generate beyond its natural ending point, which is not
>>> # necessarily incorrect
>>> gen_out = model.generate(**inputs, min_length=10)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
A number: one thousand, nine hundred and ninety-four
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L153)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

min_length (`int`) : The minimum length below which the score of `eos_token_id` is set to `-float("Inf")`.

eos_token_id (`Union[int, list[int], torch.Tensor]`) : The id(s) of the *end-of-sequence* token.

device (`str`, *optional*, defaults to `"cpu"`) : The device to allocate the tensors.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.MinNewTokensLengthLogitsProcessor[[transformers.MinNewTokensLengthLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L163)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) enforcing a min-length of new tokens by setting EOS (End-Of-Sequence) token probability to 0.
Contrarily to [MinLengthLogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.MinLengthLogitsProcessor), this processor ignores the prompt.

Examples:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
>>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

>>> inputs = tokenizer(["A number:"], return_tensors="pt")
>>> gen_out = model.generate(**inputs)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
A number: one

>>> # setting `min_new_tokens` will force the model to generate beyond its natural ending point, which is not
>>> # necessarily incorrect
>>> gen_out = model.generate(**inputs, min_new_tokens=2)
>>> print(tokenizer.batch_decode(gen_out, skip_special_tokens=True)[0])
A number: one thousand
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L223)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

prompt_length_to_skip (`int`) : The input tokens length. Not a valid argument when used with `generate` as it will automatically assign the input length.

min_new_tokens (`int`) : The minimum *new* tokens length below which the score of `eos_token_id` is set to `-float("Inf")`.

eos_token_id (`Union[int, list[int], torch.Tensor]`) : The id(s) of the *end-of-sequence* token.

device (`str`, *optional*, defaults to `"cpu"`) : The device to allocate the tensors.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.NoBadWordsLogitsProcessor[[transformers.NoBadWordsLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1389)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that enforces that specified sequences will never be selected.

In order to get the token ids of the words that should not appear in the generated text, make sure to set
`add_prefix_space=True` when initializing the tokenizer, and use `tokenizer(bad_words,
add_special_tokens=False).input_ids`. The `add_prefix_space` argument is only supported for some slow tokenizers,
as fast tokenizers' prefixing behaviours come from `pre tokenizers`. Read more
[here](https://huggingface.co/docs/tokenizers/api/pre-tokenizers).

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer(["In a word, the cake is a"], return_tensors="pt")

>>> output_ids = model.generate(inputs["input_ids"], max_new_tokens=5, pad_token_id=tokenizer.eos_token_id)
>>> print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
In a word, the cake is a bit of a mess.

>>> # Now let's take the bad words out. Please note that the tokenizer is initialized differently
>>> tokenizer_with_prefix_space = AutoTokenizer.from_pretrained("openai-community/gpt2", add_prefix_space=True)

>>> def get_tokens_as_list(word_list):
...     "Converts a sequence of words into a list of tokens"
...     tokens_list = []
...     for word in word_list:
...         tokenized_word = tokenizer_with_prefix_space([word], add_special_tokens=False).input_ids[0]
...         tokens_list.append(tokenized_word)
...     return tokens_list

>>> bad_words_ids = get_tokens_as_list(word_list=["mess"])
>>> output_ids = model.generate(
...     inputs["input_ids"], max_new_tokens=5, bad_words_ids=bad_words_ids, pad_token_id=tokenizer.eos_token_id
... )
>>> print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
In a word, the cake is a bit of a surprise.
```

`__call__(input_ids: LongTensor, scores: FloatTensor)`

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1281)

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search.

**Parameters:**

bad_words_ids (`list[list[int]]`) : List of list of token ids that are not allowed to be generated.

eos_token_id (`Union[int, list[int], torch.Tensor]`, *optional*) : The id(s) of the *end-of-sequence* token.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.NoRepeatNGramLogitsProcessor[[transformers.NoRepeatNGramLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1078)

N-grams are groups of "n" consecutive words, characters, or tokens taken from a sequence of text. Given the
sentence: "She runs fast", the bi-grams (n=2) would be ("she", "runs") and ("runs", "fast"). In text generation,
avoiding repetitions of word sequences provides a more diverse output. This [LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) enforces no
repetition of n-grams by setting the scores of banned tokens to negative infinity which eliminates those tokens
from consideration when further processing the scores. Note that, for decoder-only models like most LLMs, the
prompt is also considered to obtain the n-grams.
See also the corresponding implementation in [Fairseq](https://github.com/pytorch/fairseq/blob/a07cb6f40480928c9e0548b737aadd36ee66ac76/fairseq/sequence_generator.py#L345).

Use n-gram penalties with care. For instance, penalizing 2-grams (bigrams) in an article about the city of New York
might lead to undesirable outcomes where the city's name appears only once in the entire text.
[Reference](https://huggingface.co/blog/how-to-generate)

Examples:

```py
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
>>> inputs = tokenizer(["Today I"], return_tensors="pt")

>>> output = model.generate(**inputs)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
Today I'm not sure if I'm going to be able to do it.

>>> # Now let's add ngram size using `no_repeat_ngram_size`. This stops the repetitions ("I'm") in the output.
>>> output = model.generate(**inputs, no_repeat_ngram_size=2)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
Today I'm not sure if I can get a better understanding of the nature of this issue
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1125))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

ngram_size (`int`) : All ngrams of size `ngram_size` can only occur once.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.PrefixConstrainedLogitsProcessor[[transformers.PrefixConstrainedLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1478)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that enforces constrained generation and is useful for prefix-conditioned constrained
generation. See [Autoregressive Entity Retrieval](https://huggingface.co/papers/2010.00904) for more information.

Examples:

```py
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")

>>> inputs = tokenizer("Alice and Bob", return_tensors="pt")

>>> # By default, it continues generating according to the model's logits
>>> outputs = model.generate(**inputs, max_new_tokens=5)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
Alice and Bob are friends

>>> # We can constrain it with `prefix_allowed_tokens_fn` to force a certain behavior based on a prefix.
>>> # For instance, we can force an entire entity to be generated when its beginning is detected.
>>> entity = tokenizer(" Bob Marley", return_tensors="pt").input_ids[0]  # 3 tokens
>>> def prefix_allowed_tokens_fn(batch_id, input_ids):
...     '''
...     Attempts to generate 'Bob Marley' when 'Bob' is detected.
...     In this case, `batch_id` is not used, but you can set rules for each batch member.
...     '''
...     if input_ids[-1] == entity[0]:
...         return [entity[1].item()]
...     elif input_ids[-2] == entity[0] and input_ids[-1] == entity[1]:
...         return [entity[2].item()]
...     return list(range(tokenizer.vocab_size))  # If no match, allow all tokens

>>> outputs = model.generate(**inputs, max_new_tokens=5, prefix_allowed_tokens_fn=prefix_allowed_tokens_fn)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
Alice and Bob Marley
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1529))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

prefix_allowed_tokens_fn (`Callable[[int, torch.Tensor], list[int]]`) : This function constrains the beam search to allowed tokens only at each step. It takes 2 arguments: the batch ID `batch_id` and `input_ids`. It has to return a list with the allowed tokens for the next generation step, conditioned on the previously generated tokens `input_ids` and the batch ID `batch_id`.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.RepetitionPenaltyLogitsProcessor[[transformers.RepetitionPenaltyLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L301)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that prevents the repetition of previous tokens through a penalty. This penalty is applied at
most once per token. Note that, for decoder-only models like most LLMs, the considered tokens include the prompt
by default.

In the original [paper](https://huggingface.co/papers/1909.05858), the authors suggest the use of a penalty of around
1.2 to achieve a good balance between truthful generation and lack of repetition. To penalize and reduce
repetition, use `penalty` values above 1.0, where a higher value penalizes more strongly. To reward and encourage
repetition, use `penalty` values between 0.0 and 1.0, where a lower value rewards more strongly.

Examples:

```py
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, RepetitionPenaltyLogitsProcessor

>>> # Initializing the model and tokenizer for it
>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")
>>> inputs = tokenizer(["I'm not going to"], return_tensors="pt")

>>> # This shows a normal generate without any specific parameters
>>> summary_ids = model.generate(**inputs)
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
I'm not going to be able to do that. I'm going to be able to do that

>>> # This generates a penalty for repeated tokens
>>> penalized_ids = model.generate(**inputs, repetition_penalty=1.1)
>>> print(tokenizer.batch_decode(penalized_ids, skip_special_tokens=True)[0])
I'm not going to be able to do that. I'll just have to go out and play

>>> # We can also exclude the input prompt by creating an instance of this class
>>> # with a `prompt_ignore_length` and passing it as a custom logit processor
>>> rep_pen_processor = RepetitionPenaltyLogitsProcessor(
...     penalty=1.1,
...     prompt_ignore_length=inputs["input_ids"].shape[-1]
... )
>>> penalized_ids = model.generate(**inputs, logits_processor=[rep_pen_processor])
>>> print(tokenizer.batch_decode(penalized_ids, skip_special_tokens=True)[0])
I'm not going to be able to do that. I'm going to have to go through a lot of things, and
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L369))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

penalty (`float`) : The parameter for repetition penalty. 1.0 means no penalty. Above 1.0 penalizes previously generated tokens. Between 0.0 and 1.0 rewards previously generated tokens.

prompt_ignore_length (`int`, *optional*) : The original input ids sequence length, which if provided, will not be used in the penalty calculation.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.SequenceBiasLogitsProcessor[[transformers.SequenceBiasLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1206)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that applies an additive bias on sequences. The bias is applied to the last token of a sequence
when the next generated token can complete it. Consequently, to take the most of biasing sequences with more than
one token, consider using beam methods (to gracefully work around partially completed sequences that have a
negative bias) and applying the bias to their prefixes (to ensure the bias is applied earlier).

At a token-level, biasing a word is different from biasing a word with a space before it. If you want to bias
"foo" mid-sentence, you'll likely want to add a prefix space and bias " foo" instead. Check the tokenizer section
of our NLP course to find out why: https://huggingface.co/learn/nlp-course/chapter2/4?fw=pt

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
>>> inputs = tokenizer(["The full name of Donald is Donald"], return_tensors="pt")

>>> summary_ids = model.generate(inputs["input_ids"], max_new_tokens=4, do_sample=False)
>>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
The full name of Donald is Donald John Trump Sr.

>>> def get_tokens(word):
...     return tokenizer([word], add_special_tokens=False).input_ids[0]

>>> # IMPORTANT: Remember our tip about adding spaces before words to bias them correctly.
>>> sequence_bias = [[get_tokens("Trump"), -10.0],]  # will fail to apply bias
>>> biased_ids = model.generate(
...     inputs["input_ids"], max_new_tokens=4, do_sample=False, sequence_bias=sequence_bias
... )
>>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
The full name of Donald is Donald John Trump Sr.

>>> sequence_bias = [[get_tokens(" Trump"), -10.0],]  # will work
>>> biased_ids = model.generate(
...     inputs["input_ids"], max_new_tokens=4, do_sample=False, sequence_bias=sequence_bias
... )
>>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
The full name of Donald is Donald John Harper. He

>>> # We can also add a positive bias to nudge the model towards specific tokens or continuations. This technique
>>> # is also more effective when paired up with beam search.
>>> sequence_bias = [[get_tokens(" Donald Duck"), 10.0],]
>>> biased_ids = model.generate(
...     inputs["input_ids"], max_new_tokens=4, num_beams=4, do_sample=False, sequence_bias=sequence_bias
... )
>>> print(tokenizer.batch_decode(biased_ids, skip_special_tokens=True)[0])
The full name of Donald is Donald Duck. He is
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1281))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

sequence_bias (`list[list[Union[list[int], float]]]`) : List of lists that maps a sequence of tokens to its bias term (e.g. `[[[10, 45], -2.0], [[64], -7.5]]`). Positive biases increase the odds of the sequence being selected, while negative biases do the opposite. If a sequence has a length of 1, its bias will always be applied. Otherwise, the bias will only be applied if the sequence in question is about to be completed (in the token selection step after this processor is applied).

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.SuppressTokensAtBeginLogitsProcessor[[transformers.SuppressTokensAtBeginLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1810)

[SuppressTokensAtBeginLogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.SuppressTokensAtBeginLogitsProcessor) suppresses a list of tokens as soon as the `generate` function starts
generating using `begin_index` tokens. This should ensure that the tokens defined by `begin_suppress_tokens` are
not generated at the beginning. Originally created for
[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper).

Examples:

```python
>>> from transformers import AutoProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt")

>>> # Whisper has `begin_suppress_tokens` set by default (= `[220, 50256]`). 50256 is the EOS token, so this means
>>> # it can't generate an EOS token in the first iteration, but it can in the others.
>>> outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
>>> print(outputs.scores[0][0, 50256])
tensor(-inf)
>>> print(outputs.scores[-1][0, 50256])  # in other places we can see some probability mass for EOS
tensor(29.9010)

>>> # If we disable `begin_suppress_tokens`, we can generate EOS in the first iteration.
>>> outputs = model.generate(
...     **inputs, return_dict_in_generate=True, output_scores=True, begin_suppress_tokens=None
... )
>>> print(outputs.scores[0][0, 50256])
tensor(11.2027)
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1852))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)

scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) : Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.SuppressTokensLogitsProcessor[[transformers.SuppressTokensLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1863)

This processor can be used to suppress a list of tokens. The processor will set their log probs to `-inf` so
that they are not generated. Originally created for
[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper).

Examples:

```python
>>> from transformers import AutoProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor(ds[0]["audio"]["array"], return_tensors="pt")

>>> # Whisper has a long list of suppressed tokens. For instance, in this case, the token 1 is suppressed by default.
>>> outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
>>> print(outputs.scores[1][0, 1])  # 1 (and not 0) is the first freely generated token
tensor(-inf)

>>> # If we disable `suppress_tokens`, we can generate it.
>>> outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, suppress_tokens=None)
>>> print(outputs.scores[1][0, 1])
tensor(6.0678)
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1895))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)

scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) : Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam search or log softmax for each vocabulary token when using beam search

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.TemperatureLogitsWarper[[transformers.TemperatureLogitsWarper]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L235)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) for temperature (exponential scaling output probability distribution), which effectively means
that it can control the randomness of the predicted tokens. Often used together with [TopPLogitsWarper](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.TopPLogitsWarper) and
[TopKLogitsWarper](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.TopKLogitsWarper).

Make sure that `do_sample=True` is included in the `generate` arguments otherwise the temperature value won't have
any effect.

Examples:

```python
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(0)  # for reproducibility

>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> model.config.pad_token_id = model.config.eos_token_id
>>> inputs = tokenizer(["Hugging Face Company is"], return_tensors="pt")

>>> # With temperature=1.0, the default, we consistently get random outputs due to random sampling.
>>> generate_kwargs = {"max_new_tokens": 10, "do_sample": True, "temperature": 1.0, "num_return_sequences": 2}
>>> outputs = model.generate(**inputs, **generate_kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Hugging Face Company is one of these companies that is going to take a',
"Hugging Face Company is a brand created by Brian A. O'Neil"]

>>> # However, with temperature close to 0, it approximates greedy decoding strategies (invariant)
>>> generate_kwargs["temperature"] = 0.0001
>>> outputs = model.generate(**inputs, **generate_kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['Hugging Face Company is a company that has been around for over 20 years',
'Hugging Face Company is a company that has been around for over 20 years']
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L295))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

temperature (`float`) : Strictly positive float value used to modulate the logits distribution. A value smaller than `1` decreases randomness (and vice versa), with `0` being equivalent to shifting all probability mass to the most likely token.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.TopKLogitsWarper[[transformers.TopKLogitsWarper]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L535)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that performs top-k, i.e. restricting to the k highest probability elements. Often used
together with [TemperatureLogitsWarper](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.TemperatureLogitsWarper) and [TopPLogitsWarper](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.TopPLogitsWarper).

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(1)
>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")

>>> inputs = tokenizer("A sequence: A, B, C, D", return_tensors="pt")

>>> # With sampling, the output is unexpected -- sometimes too unexpected.
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: A, B, C, D, E — S — O, P — R

>>> # With `top_k` sampling, the output gets restricted to the k most likely tokens.
>>> # Pro tip: In practice, LLMs use `top_k` in the 5-50 range.
>>> outputs = model.generate(**inputs, do_sample=True, top_k=2)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: A, B, C, D, E, F, G, H, I
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L579))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

top_k (`int`) : The number of highest probability vocabulary tokens to keep for top-k-filtering.

filter_value (`float`, *optional*, defaults to -inf) : All filtered values will be set to this float value.

min_tokens_to_keep (`int`, *optional*, defaults to 1) : Minimum number of tokens that cannot be filtered.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.TopPLogitsWarper[[transformers.TopPLogitsWarper]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L468)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that performs top-p, i.e. restricting to the smallest set of most probable tokens whose cumulative probability adds up to `top_p` or higher. Often used together with [TemperatureLogitsWarper](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.TemperatureLogitsWarper) and [TopKLogitsWarper](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.TopKLogitsWarper).

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> set_seed(1)
>>> model = AutoModelForCausalLM.from_pretrained("distilbert/distilgpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilgpt2")

>>> inputs = tokenizer("A sequence: 1, 2", return_tensors="pt")

>>> # With sampling, the output is unexpected -- sometimes too unexpected.
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3 | 

>>> # With `top_p` sampling, the output gets restricted to high-probability tokens.
>>> # Pro tip: In practice, LLMs use `top_p` in the 0.9-0.95 range.
>>> outputs = model.generate(**inputs, do_sample=True, top_p=0.1)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
A sequence: 1, 2, 3, 4, 5, 6, 7, 8, 9
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L519))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

top_p (`float`) : If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or higher are kept for generation.

filter_value (`float`, *optional*, defaults to -inf) : All filtered values will be set to this float value.

min_tokens_to_keep (`int`, *optional*, defaults to 1) : Minimum number of tokens that cannot be filtered.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.TypicalLogitsWarper[[transformers.TypicalLogitsWarper]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that performs typical decoding. Inspired by how humans use language, it prioritizes tokens whose log probability is close to the entropy of the token probability distribution, which means that the most likely tokens may be discarded in the process.

See [Typical Decoding for Natural Language Generation](https://huggingface.co/papers/2202.00666) for more information.

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed

>>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")

>>> inputs = tokenizer("1, 2, 3", return_tensors="pt")

>>> # We can see that greedy decoding produces a sequence of numbers
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
1, 2, 3, 4, 5, 6, 7, 8, 9, 10,

>>> # For this particular seed, we can see that sampling produces nearly the same low-information (= low entropy)
>>> # sequence
>>> set_seed(18)
>>> outputs = model.generate(**inputs, do_sample=True)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
1, 2, 3, 4, 5, 6, 7, 8, 9 and 10

>>> # With `typical_p` set, the most obvious sequence is no longer produced, which may be good for your problem
>>> set_seed(18)
>>> outputs = model.generate(
...     **inputs, do_sample=True, typical_p=0.1, return_dict_in_generate=True, output_scores=True
... )
>>> print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)[0])
1, 2, 3 and 5

>>> # We can see that the token corresponding to "4" (token 934) in the second position, the most likely token
>>> # as seen with greedy decoding, was entirely blocked out
>>> print(outputs.scores[1][0, 934])
tensor(-inf)
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L834))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

mass (`float`, *optional*, defaults to 0.9) : Value of typical_p between 0 and 1 inclusive.

filter_value (`float`, *optional*, defaults to -inf) : All filtered values will be set to this float value.

min_tokens_to_keep (`int`, *optional*, defaults to 1) : Minimum number of tokens that cannot be filtered.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

#### transformers.UnbatchedClassifierFreeGuidanceLogitsProcessor[[transformers.UnbatchedClassifierFreeGuidanceLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L2220)

Logits processor for Classifier-Free Guidance (CFG). The processor computes a weighted average across scores
from prompt conditional and prompt unconditional (or negative) logits, parameterized by the `guidance_scale`.
The unconditional scores are computed internally by prompting `model` with the `unconditional_ids` branch.

See [the paper](https://huggingface.co/papers/2306.17806) for more information.

Examples:

```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> inputs = tokenizer(["Today, a dragon flew over Paris, France,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=1.5)
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
'Today, a dragon flew over Paris, France, killing at least 50 people and injuring more than 100'

>>> # with a negative prompt
>>> neg_inputs = tokenizer(["A very happy event happened,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=2, negative_prompt_ids=neg_inputs["input_ids"])
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
'Today, a dragon flew over Paris, France, killing at least 130 people. French media reported that'

>>> # with a positive prompt
>>> neg_inputs = tokenizer(["A very happy event happened,"], return_tensors="pt")
>>> out = model.generate(inputs["input_ids"], guidance_scale=0, negative_prompt_ids=neg_inputs["input_ids"])
>>> tokenizer.batch_decode(out, skip_special_tokens=True)[0]
"Today, a dragon flew over Paris, France, and I'm very happy to be here. I"
```

`__call__(input_ids, scores)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L2326))

**Parameters:**

guidance_scale (`float`) : The guidance scale for classifier free guidance (CFG). CFG is enabled by setting `guidance_scale != 1`. Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. A value smaller than 1 has the opposite effect, while making the negative prompt provided with negative_prompt_ids (if any) act as a positive prompt.

model (`PreTrainedModel`) : The model computing the unconditional scores. Supposedly the same as the one computing the conditional scores. Both models must use the same tokenizer.

unconditional_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) : Indices of input sequence tokens in the vocabulary for the unconditional branch. If unset, will default to the last token of the prompt.

unconditional_attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) : Attention mask for unconditional_ids.

use_cache (`bool`, *optional*, defaults to `True`) : Whether to cache key/values during the negative prompt forward pass.

#### transformers.WhisperTimeStampLogitsProcessor[[transformers.WhisperTimeStampLogitsProcessor]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1903)

[LogitsProcessor](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.LogitsProcessor) that modifies the logits for the generation of timestamps in the transcription. When the input
tokens are at a specific threshold, the processor sets the scores to negative infinity. The processor makes sure
that timestamp tokens appear in pairs, by masking out the logits that would break this pairing pattern. This is
done to maintain the consistency and structure of generated timestamps. It also ensures that when the predicted
probability of sampling any of the timestamp token is greater than any individual non-timestamp token, those
non-timestamp logits are set to negative infinity. This is done to ensure the generation of timestamps over other
potential tokens.

See [the paper](https://huggingface.co/papers/2212.04356) for more information.

Examples:
```python
>>> import torch
>>> from transformers import AutoProcessor, WhisperForConditionalGeneration, GenerationConfig
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor(ds[3]["audio"]["array"], return_tensors="pt")
>>> input_features = inputs.input_features

>>> # Displaying timestamps
>>> generated_ids = model.generate(inputs=input_features, return_timestamps=True)
>>> transcription = processor.batch_decode(generated_ids, decode_with_timestamps=True)[0]
>>> print("Transcription:", transcription)
Transcription:  He has grave doubts whether Sir Frederick Layton's work is really Greek after all, and can discover in it but little of rocky Ithaca.

>>> # No timestamps & change EOS:
>>> # This allows the user to select a specific token to terminate the sequence on, in this case it's the word "can" (460)
>>> model.generation_config.eos_token_id = 460
>>> generated_ids = model.generate(inputs=input_features, return_timestamps=False)
>>> transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
>>> print("Transcription:", transcription)
Transcription:  He has grave doubts whether Sir Frederick Layton's work is really Greek after all and can
```

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/logits_process.py#L1992))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
  search or log softmax for each vocabulary token when using beam search.

**Parameters:**

generate_config (`GenerateConfig`) : The generate config used to generate the output. The following parameters are required: eos_token_id (`int`, *optional*, defaults to 50257): The id of the *end-of-sequence* token. no_timestamps_token_id (`int`, *optional*, defaults to 50363): The id of the `"<|notimestamps|>"` token. max_initial_timestamp_index (`int`, *optional*, defaults to 1): Used to set the maximum value of the initial timestamp. This is used to prevent the model from predicting timestamps that are too far in the future.

begin_index (`int`) : Token index of the first token that is generated by the model.

_detect_timestamp_from_logprob (`bool`, *optional*) : Whether timestamps can be predicted from logprobs over all timestamps.

**Returns:**

`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`

The processed prediction scores.

## StoppingCriteria[[transformers.StoppingCriteria]]

A [StoppingCriteria](/docs/transformers/v5.3.0/zh/internal/generation_utils#transformers.StoppingCriteria) can be used to change when generation stops (other than by producing an EOS token). Note that this is only available for our PyTorch implementations.

#### transformers.StoppingCriteria[[transformers.StoppingCriteria]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/stopping_criteria.py#L45)

Abstract base class for all stopping criteria that can be applied during generation.

If your stopping criteria depends on the `scores` input, make sure you pass `return_dict_in_generate=True,
output_scores=True` to `generate`.
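
A custom criterion can be written by subclassing this class and overriding `__call__`. Below is a minimal sketch; the class name `StopOnTokenCriteria` and the token id `13` are illustrative choices of ours, not part of the library, and a loaded `model` and tokenized `inputs` are assumed:

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnTokenCriteria(StoppingCriteria):
    """Stops a row of the batch as soon as it has produced `stop_token_id`."""

    def __init__(self, stop_token_id: int):
        self.stop_token_id = stop_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.BoolTensor:
        # Return one boolean per batch row; `True` stops generation for that row.
        return (input_ids == self.stop_token_id).any(dim=-1)

# Hypothetical usage, passed through the `stopping_criteria` argument of `generate`:
# outputs = model.generate(**inputs, stopping_criteria=StoppingCriteriaList([StopOnTokenCriteria(13)]))
```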

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/stopping_criteria.py#L52))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.

  Indices can be obtained using `AutoTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and
  [PreTrainedTokenizer.__call__()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax
  or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the `scores` input,
  make sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`.
- **kwargs** (`dict[str, Any]`, *optional*) --
  Additional stopping criteria specific kwargs.

**Parameters:**

input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : Indices of input sequence tokens in the vocabulary.  Indices can be obtained using `AutoTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details.  [What are input IDs?](../glossary#input-ids)

scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) : Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the `scores` input, make sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`.

kwargs (`dict[str, Any]`, *optional*) : Additional stopping criteria specific kwargs.

**Returns:**

`torch.BoolTensor` of shape `(batch_size, 1)`

`True` indicates we stop generation for a particular row.
`False` indicates we should continue.

#### transformers.StoppingCriteriaList[[transformers.StoppingCriteriaList]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/stopping_criteria.py#L495)

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/stopping_criteria.py#L496))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.

  Indices can be obtained using `AutoTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and
  [PreTrainedTokenizer.__call__()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax
  or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the `scores` input,
  make sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`.
- **kwargs** (`dict[str, Any]`, *optional*) --
  Additional stopping criteria specific kwargs.

**Parameters:**

input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`) : Indices of input sequence tokens in the vocabulary.  Indices can be obtained using `AutoTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and [PreTrainedTokenizer.__call__()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details.  [What are input IDs?](../glossary#input-ids)

scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) : Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the `scores` input, make sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`.

kwargs (`dict[str, Any]`, *optional*) : Additional stopping criteria specific kwargs.

**Returns:**

`torch.BoolTensor` of shape `(batch_size, 1)`

`True` indicates we stop generation for a particular row.
`False` indicates we should continue.

#### transformers.MaxLengthCriteria[[transformers.MaxLengthCriteria]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/stopping_criteria.py#L57)

This class can be used to stop generation whenever the full generated number of tokens exceeds `max_length`. Keep
in mind for decoder-only type of transformers, this will include the initial prompted tokens.
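
A minimal usage sketch, assuming a loaded `model` and tokenized `inputs` (the 20-token limit is an arbitrary choice for illustration):

```python
from transformers import MaxLengthCriteria, StoppingCriteriaList

# Stop once the full sequence (prompt tokens included) reaches 20 tokens.
criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
# outputs = model.generate(**inputs, stopping_criteria=criteria)
```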

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/stopping_criteria.py#L73))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.

  Indices can be obtained using `AutoTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and
  [PreTrainedTokenizer.__call__()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax
  or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the `scores` input,
  make sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`.
- **kwargs** (`dict[str, Any]`, *optional*) --
  Additional stopping criteria specific kwargs.

**Parameters:**

max_length (`int`) : The maximum length that the output sequence can have in number of tokens.

max_position_embeddings (`int`, *optional*) : The maximum model length, as defined by the model's `config.max_position_embeddings` attribute.

**Returns:**

`torch.BoolTensor` of shape `(batch_size, 1)`

`True` indicates we stop generation for a particular row.
`False` indicates we should continue.

#### transformers.MaxTimeCriteria[[transformers.MaxTimeCriteria]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/stopping_criteria.py#L86)

This class can be used to stop generation whenever the full generation exceeds some amount of time. By default, the
time will start being counted when you initialize this function. You can override this by passing an
`initial_time`.
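
A minimal usage sketch, assuming a loaded `model` and tokenized `inputs` (the 2-second budget is an arbitrary choice for illustration):

```python
from transformers import MaxTimeCriteria, StoppingCriteriaList

# The clock starts when the criterion is instantiated, so create it right before generating.
criteria = StoppingCriteriaList([MaxTimeCriteria(max_time=2.0)])
# outputs = model.generate(**inputs, stopping_criteria=criteria)
```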

`__call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/stopping_criteria.py#L103))

- **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`) --
  Indices of input sequence tokens in the vocabulary.

  Indices can be obtained using `AutoTokenizer`. See [PreTrainedTokenizer.encode()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and
  [PreTrainedTokenizer.__call__()](/docs/transformers/v5.3.0/zh/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details.

  [What are input IDs?](../glossary#input-ids)
- **scores** (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`) --
  Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax
  or scores for each vocabulary token after SoftMax. If this stopping criteria depends on the `scores` input,
  make sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`.
- **kwargs** (`dict[str, Any]`, *optional*) --
  Additional stopping criteria specific kwargs.

**Parameters:**

max_time (`float`) : The maximum allowed time in seconds for the generation.

initial_time (`float`, *optional*, defaults to `time.time()`) : The start of the generation allowed time.

**Returns:**

`torch.BoolTensor` of shape `(batch_size, 1)`

`True` indicates we stop generation for a particular row.
`False` indicates we should continue.

## Streamers[[transformers.TextStreamer]]

#### transformers.TextStreamer[[transformers.TextStreamer]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/streamers.py#L40)

Simple text streamer that prints the token(s) to stdout as soon as entire words are formed.

The API for the streamer classes is still under development and may change in the future.

Examples:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

>>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextStreamer(tok)

>>> # Despite returning the usual output, the streamer will also print the generated text to stdout.
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,
```

`end()` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/streamers.py#L118))

Flushes any remaining cache and prints a newline to stdout.

**Parameters:**

tokenizer (`AutoTokenizer`) : The tokenizer used to decode the tokens.

skip_prompt (`bool`, *optional*, defaults to `False`) : Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots.

decode_kwargs (`dict`, *optional*) : Additional keyword arguments to pass to the tokenizer's `decode` method.
#### on_finalized_text[[transformers.TextStreamer.on_finalized_text]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/streamers.py#L132)

Prints the new text to stdout. If the stream is ending, also prints a newline.
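
One way to redirect the streamed text somewhere other than stdout is to override this method in a subclass. A minimal sketch, where the class name `FileStreamer` and the output path are our own illustrative choices and `tok`, `model`, and `inputs` are assumed to be loaded as in the example above:

```python
from transformers import TextStreamer

class FileStreamer(TextStreamer):
    """Writes each finalized chunk of generated text to a file instead of stdout."""

    def __init__(self, tokenizer, path, **decode_kwargs):
        super().__init__(tokenizer, **decode_kwargs)
        self.path = path

    def on_finalized_text(self, text: str, stream_end: bool = False):
        # Append the finalized text; add a newline when the stream ends.
        with open(self.path, "a") as f:
            f.write(text)
            if stream_end:
                f.write("\n")

# streamer = FileStreamer(tok, "generation.txt")
# _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
```
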
#### put[[transformers.TextStreamer.put]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/streamers.py#L84)

Receives tokens, decodes them, and prints them to stdout as soon as they form entire words.

#### transformers.TextIteratorStreamer[[transformers.TextIteratorStreamer]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/streamers.py#L161)

Streamer that stores print-ready text in a queue, to be used by a downstream application as an iterator. This is
useful for applications that benefit from accessing the generated text in a non-blocking way (e.g. in an interactive
Gradio demo).

The API for the streamer classes is still under development and may change in the future.

Examples:

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
>>> from threading import Thread

>>> tok = AutoTokenizer.from_pretrained("openai-community/gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")
>>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt")
>>> streamer = TextIteratorStreamer(tok)

>>> # Run the generation in a separate thread, so that we can fetch the generated text in a non-blocking way.
>>> generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=20)
>>> thread = Thread(target=model.generate, kwargs=generation_kwargs)
>>> thread.start()
>>> generated_text = ""
>>> for new_text in streamer:
...     generated_text += new_text
>>> generated_text
'An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,'
```

`on_finalized_text(text: str, stream_end: bool = False)` ([Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/generation/streamers.py#L215))

Put the new text in the queue. If the stream is ending, also put a stop signal in the queue.

**Parameters:**

tokenizer (`AutoTokenizer`) : The tokenizer used to decode the tokens.

skip_prompt (`bool`, *optional*, defaults to `False`) : Whether to skip the prompt to `.generate()` or not. Useful e.g. for chatbots.

timeout (`float`, *optional*) : The timeout for the text queue. If `None`, the queue will block indefinitely. Useful to handle exceptions in `.generate()`, when it is called in a separate thread.

decode_kwargs (`dict`, *optional*) : Additional keyword arguments to pass to the tokenizer's `decode` method.

